Pectra Pages

This post is a compilation of perspectives from 40+ core Ethereum contributors (client developers, researchers, and coordinators from 18 different teams) looking back on the Pectra upgrade process and ahead to what's next. We hope you enjoy the sentiment snapshots from this time - thank you for reading and being part of the Ethereum story!

Stateful Works creates cultural artifacts for the Ethereum community - see previous installments of this series: Beacon Book (July 2021), Merge Manual (Sept. 2022), and Dencun Diaries (March 2024).

  • Pectra Overview

  • Prompts + Summary Responses

  • 40+ Individual Responses

Pectra Overview 🦒🦒🦒

by Nixo

The Pectra upgrade went live on May 7th, 2025 at 10:05:11 UTC. Between both the consensus and execution layers, this was the 18th fork in Ethereum's history and the 3rd since Ethereum moved to Proof of Stake. It is the largest fork by number of EIPs, with 10 core EIPs, 1 informational EIP, and 1 networking EIP. At the Kenya interop event, core contributors chose a giraffe as the upgrade's unofficial mascot.

The narratives of previous upgrades were straightforward. The Merge: move to Proof of Stake. Shapella: enable withdrawals. Dencun: deliver blobs. Pectra was the first post-Merge fork without a clear headlining feature. As a result, a wide array of EIPs were proposed for Pectra's first iteration, before two of the largest features were split out into Fusaka (the fork to follow Pectra).

This isn't to say that none of the features were headliner-worthy - just that, with disagreement about the most critical thing to land next, the fork lost the unified vision that the previous three had benefitted from. Additional wrinkles included the rapid pace of research development and the fact that planning began as early as November 2023. Few contributors had both a broad understanding of the entire scope and intimate knowledge of each EIP. As you will see in the responses below, the number of features made testing and finding bugs more difficult. The EthPandaOps team published a bird's-eye blog post in May 2024, which suggested different ways to split up the fork to reduce this burden.

The challenges culminated with the Pectra activation on Holešky testnet (and then Sepolia testnet). A simple configuration issue specific to testnets threw the network into disarray. Testnet upgrades had gone poorly before, but it was the lead-up context that made these feel especially difficult. On the plus side, the failure set off an incident response to recover a network from a very broken state. The coordination was a useful test of existing capabilities, but also made it clear where we need better procedures for such an emergency.

Pectra felt like a make-or-break hurdle: with such a difficult and drawn-out upgrade, the weeks and hours leading to mainnet activation were tense. Fortunately, the upgrade went off without any issues. Tired and happy on the other side, contributors overwhelmingly came out of it expressing relief and pride that they had successfully brought new features to the Ethereum ecosystem. Core contributors are already making use of the learnings with process improvements and new tooling. Having encountered and resolved challenges, we have a higher caliber of experienced Ethereum core contributors. Binji summed it up well:

“[Core contributors] coordinate hundreds of researchers, auditors, and client teams across time zones, cultures, and philosophies, yet ship like a single mind. They do it all in public, with every decision dissected by the loudest peanut gallery on the internet, and still keep the vibe collaborative.”

Prompts + Summary Responses

Responses were collected in April and May 2025, both before and after the upgrade. Respondents may not have answered all prompts. While all core contributors were invited, only a subset submitted responses to be included. If you intended to submit but were not able to before publishing, please reach out to Trent or Nixo. Perspectives should be taken as the opinion of the individual, not necessarily that of all core contributors or their respective organizations. Below, you will find each prompt, along with a summary of all 40+ responses in italics.

a. Pectra successes: What has the core development process handled well? Consider timelines, scoping, dividing the fork, testnet incidents, coordinating on ACD, etc.

The core development process was largely successful in the final mainnet launch. Many respondents highlighted the impressive collaboration and coordination between client teams, especially during the challenging testnet incidents like Holesky and Sepolia, and the swift responses to those issues were seen as positive.

b. Pectra improvements: What should the core protocol community have done better?

A significant point of consensus was that the scoping of Pectra was too ambitious, leading to complexity, delays, and increased engineering effort. Many suggested the community should focus on smaller, more targeted forks in the future and improve the process of estimating EIP implementation effort and impact.

c. Pectra contributions: Do you have any contributions that you (or your team) are particularly proud of?

Contributions varied, but notable mentions included work on Max Effective Balance (Max EB) for reducing network load, performance optimizations in clients, and the development of testing and analysis tools like Assertoor and Contributoor, which significantly aided the upgrade process.

d. Pectra challenges: If you worked on previous forks, how did this fork's testnet difficulties compare to previous fork testnet experiences? What were the most challenging bits of Pectra testnets? e.g. technical complexities, social coordination, unexpected timelines, difficult bugs

The Pectra testnets were widely considered more difficult than previous forks due to the sheer number of EIPs and unexpected bugs, particularly the non-finality issues on Holesky. The coordination overhead and the feeling of a lack of a central focus compared to previous upgrades were also noted as challenges.

e. Pectra kudos: Is there someone whose work you particularly appreciated, or you feel went above and beyond? What did you specifically appreciate about their contribution?

Many individuals and teams were recognized, with frequent shout-outs to the EthPandaOps team for their tireless testing and devnet support, and to the EF testing and security teams for their crucial bug finding and coordination efforts. Specific developers who went above and beyond on challenging EIPs or during testnet incidents were also appreciated.

f. Pectra 1-3 words: In three words or fewer, how do you feel about Pectra being so close to complete?

Common sentiments included "relieved," "challenging," "anxious," "excited," and a forward-looking desire for the next fork.

g. future priorities: Which protocol improvements should be prioritized in the near, medium, and/or long term? Why?

Scaling, both L1 and L2, was overwhelmingly identified as the top priority for the near, medium, and long term to ensure Ethereum's continued growth and usability. Improving user and developer experience (UX/DevEx), exploring in-protocol privacy, and enhancing censorship resistance were also frequently mentioned.

h. future improvements: What should the core protocol community do to improve our processes in bringing future upgrades to mainnet?

To improve future upgrades, respondents suggested earlier and more decisive scoping of forks, better estimation of EIP complexity, more efficient and inclusive ACD processes with clearer expectations and timelines, and increased testing and collaboration, including earlier devnets and more community involvement.

Individual Responses

1. Age

  • Client Implementer

  • Lighthouse

a. Pectra successes: Although the Holesky testnet didn't survive, the revival effort and quick response were pretty epic.

b. Pectra improvements: As always, it's difficult to decide what should go into which fork. It's hard to estimate the engineering effort and impact of a lot of features. This is a continual learning process, but I think the community is improving on it. We as a team (Lighthouse) are also making changes to how we communicate our team's perspective on what should be included or not.

c. Pectra contributions: I'm personally a big supporter of Max Effective Balance (Max EB). I think it's an underrated feature, and it almost didn't get included. It has the potential to significantly reduce load on all the clients and the network as a whole. We have also been adding the ability to consolidate validators into our Lighthouse UI (Siren), which we hope will make life much easier for users managing validators.
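For context on why Max EB reduces load, a rough back-of-the-envelope sketch (illustrative Python with a hypothetical operator size; per EIP-7251, the effective balance cap rises from 32 ETH to 2048 ETH while the 32 ETH minimum activation balance stays):

```python
# Illustrative arithmetic for EIP-7251 (Max EB): raising the maximum
# effective balance lets large stakers consolidate many 32 ETH validators
# into a few, shrinking the active validator set and with it the
# attestation and networking load.
MAX_EB_BEFORE = 32    # ETH, pre-Electra cap
MAX_EB_AFTER = 2048   # ETH, Electra cap (minimum activation stays 32 ETH)

def validators_needed(stake_eth: int, cap_eth: int) -> int:
    # Ceiling division: each validator holds at most cap_eth of
    # effective balance.
    return -(-stake_eth // cap_eth)

stake = 65_536  # hypothetical operator staking 65,536 ETH
print(validators_needed(stake, MAX_EB_BEFORE))  # 2048 validators before
print(validators_needed(stake, MAX_EB_AFTER))   # 32 validators after
```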

d. Pectra challenges: The elephant in the room here is that we broke a testnet. The revival attempt was similar to the Medalla testnet, which also had a very high degree of forking. Having a large-scale testnet fork into incorrect chains is immeasurably helpful: it lets us test our client under periods of non-finality and a high degree of forking, helping us find bugs and optimizations that make the client more robust in adverse conditions. We made some significant improvements in Lighthouse because of this.

e. Pectra kudos: Lion and Mark from the Lighthouse team, who worked on MaxEB early on and were early support for its inclusion in Pectra. Woooh!

f. Pectra 1-3 words: When Next Fork

g. future priorities: I think L1 scaling. Without L1 scaling, it seems the Ethereum core chain is more and more becoming just the beacon chain and validator set. I'd like to see the core chain being used more than just a temporary data store. Also, in-protocol privacy.

2. Ameziane

  • Client implementer - performance focused

  • Besu

a. Pectra successes: What I found interesting was how core devs handled the Sepolia and Holesky issues. I noticed some negative reactions online, but honestly, I saw them as good examples of why testnets exist in the first place. It’s normal to run into problems during a network upgrade — especially on testnets — and it was reassuring to see how quickly teams reacted and coordinated. There was solid communication between client teams, and the situation was handled professionally through ACD calls and testing discussions. In the end, these incidents helped make the Pectra upgrade process stronger, and that’s exactly what testnets are for.

b. Pectra improvements: I would say the timeline for shipping Pectra could have been handled a bit better. Originally, things seemed on track, but after the testnet incident, the roles shifted and core devs had to spend more time on testing and coordination. The creation of the new testnet, Hoodi, was necessary and valuable, but it added extra work and delayed the fork slightly. That said, I understand the decision — it's better to take more time and ensure everything is solid than to rush a mainnet upgrade. Still, a bit more anticipation and buffer in the planning could have helped manage expectations and reduce pressure on the teams.

c. Pectra contributions: My contributions are mainly focused on performance, particularly around block processing. While my role isn't in feature implementation, I work to ensure that new upgrades don’t introduce regressions that could impact the client’s efficiency. In the context of Pectra, I identified a performance regression introduced during the implementation of EIP-7702. This issue would have degraded block processing performance if it had gone unnoticed, but we were able to fix it ahead of the upgrade, ensuring a smoother transition when Pectra goes live. It’s a small but meaningful contribution that reflects the importance of having performance checks as part of the upgrade process.

d. Pectra challenges: Compared to previous forks, the Pectra testnets were definitely more challenging. The issues on Sepolia and Holesky created a lot of coordination overhead and pushed core devs to slow down and focus more on testing. One of the biggest challenges was the unexpected shift in roles — client teams had to take on more responsibility for stabilizing Holesky, and it required a lot of fast debugging and collaboration across the ecosystem.

e. Pectra kudos: Big shout out to Tim for taking on the coordination work — it definitely wasn’t easy with Pectra, especially while also handling Fusaka scoping at the same time. I also want to recognize the PandaOps team and the testing team for all the effort they put into testing and the tooling they’ve built after each issue. Their quick responses and improvements after every testnet incident really helped stabilize the process and kept things moving forward.

f. Pectra 1-3 words: Confident and proud

g. future priorities: Scaling should remain a top priority in the near and medium term; Ethereum needs to support throughput that matches or exceeds what's currently possible on L2s. At the same time, we should review underpriced precompiles like modexp, which can impact performance under load and are often overlooked. Increasing the gas limit is another lever worth exploring, but it must be paired with improvements that make the network safer and more resilient at higher limits.

h. future improvements: Give core devs more time and space to review, align, and contribute to major decisions, especially around fork scope, timing, and testing.

3. Andrew Davis

  • DevOps

  • EthPandaOps

a. Pectra successes: The sheer amount of work, testing, and coordination that has gone into this fork is really inspiring.

b. Pectra improvements: This fork was obviously a bit of a behemoth, so scoping for future forks should take priority. Testnet incidents could have been handled a bit better on the communication front.

c. Pectra contributions: Matt working on Contributoor for collecting node data for community analysis. Sam for his great analysis supporting the bump in blob counts.

d. Pectra challenges: The previous forks had a central focus, while Pectra felt the opposite. I think there is something in focusing a fork on a feature that everyone gets behind.

f. Pectra 1-3 words: sendit

g. future priorities: L1 & L2 scaling

h. future improvements: Smaller focused forks to hopefully bring down the time between forks.

4. Barnabas

  • Client tester

  • EthPandaOps

a. Pectra successes: Sepolia testnet incident response was super fast and well handled.

b. Pectra improvements: Scoping out EIPs and deciding how long each EIP takes to implement needs to improve significantly. Only considering EIPs that already have rough prototypes proven to be working would be a huge step forward, in my opinion. Coordinating the testnet recovery on Holesky was pretty bad and could be improved significantly.

c. Pectra contributions: Assertoor was by far the biggest assist for us during the different testing phases.

d. Pectra challenges: Significantly harder than Shapella (for obvious reasons), but due to the sheer number of EIPs in Pectra, it was also more difficult than shipping Dencun.

e. Pectra kudos: Too many to include, I enjoyed working with all the different client teams.

f. Pectra 1-3 words: relieved but anxious

g. future priorities: UX above everything else. Whatever makes L2 interop better between each other, that helps to create a better user experience for everyone using Ethereum.

h. future improvements: ACD calls should be merged, no longer segregated into ACDE/ACDC. We should have decision makers from both sides show up every week; that way we wouldn't sometimes need to wait 2+ weeks to make decisions that touch both the EL and CL worlds.

5. Barnabé Monnot

  • Researcher

  • EF Research

a. Pectra successes: Pectra is one of the largest forks, and as such there are lessons learned, both good and bad :) At some point it wasn't clear if we would get twice the blob target in Pectra; there was a coordinated push to make it happen, and though a 6-blob target is still a long way from where we hope to be over the next months and years, it was a good sign of commitment and ambition to reach for the higher number.

b. Pectra improvements: Scoping is always hard, and Pectra scope issues feel to me like they were downstream of a larger lack of strategy and focus. I hope we are better equipped today with both short-term plans and long-term visions.

c. Pectra contributions: Mike Neuder did most of the work there, but I was happy to contribute to this slashing analysis; it allowed me to revisit the schedule of penalties and think more critically about the degrees of freedom we had and what the true objectives of these penalties were.

e. Pectra kudos: I will shout out Mike Neuder, whose energy on the first inclusion list proposal (EIP-7547) almost made it happen for Pectra. Then it didn't, and we learned a lot from the reasons why it didn't in the design of FOCIL (EIP-7805), so 7547 walked so 7805 could run. Thanks Mike :)

Another shoutout goes to Toni and his analyses on gas, blobs, and reorgs. Outside of forks, we can have gas limit increases, and his data helped support the recent bump to an 18M gas target, as well as the push to target 6 blobs.

Lastly, the whole PandaOps team deserves once again a shout-out, they keep improving and raising our level of confidence in every fork.

f. Pectra 1-3 words: Relieved, excited, ready

g. future priorities: Near-term: Scaling L1 and blobs. Medium-term: Streamlined consensus with 3SF and shorter slot times. Long-term: Reviewing block construction primitives such as FOCIL, MCP, APS etc.

h. future improvements: We can improve scoping by bridging more of the gap between research, integration, and development. This is our hope for the new Protocol research call: opening and accelerating conversations before the last moment, so we have better pipelining between fork N+1 and N+2.

6. Daniel Lehrner

  • Client implementer

  • Besu

a. Pectra successes: I don't think it was handled very efficiently. We had a lot of late spec changes (e.g. 7702) and some very late additions (e.g. adding the blob config to the genesis file). Scoping was very fuzzy because of that. Holesky and Sepolia having different deposit contracts than mainnet was also a very unfortunate situation.

b. Pectra improvements: I think we made a lot of progress in the last few months. Scoping seems to work better, but let's see how this works out until we ship the fork - even though EOF has shown that scoping still needs to improve. Focusing mainly on one single EIP, like PeerDAS, seems to be a good idea going forward.

c. Pectra contributions: I think we are very proud of having worked together very well as a team, both within Besu and also with all the core devs together, to ship a successful hardfork.

d. Pectra challenges: Holesky and Sepolia not matching mainnet is a difficult situation. Both testnets having issues while doing the hard fork slowed us down a lot and, at least to outsiders, reduced confidence in our ability to ship hardforks safely. The successful mainnet fork should have fixed some of that, though.

e. Pectra kudos: Lightclient went above and beyond to ship 7702. He first pushed relentlessly for 3074 and afterwards worked very hard on getting the 7702 spec into its best possible form.

f. Pectra 1-3 words: Relieved

g. future priorities: I think UX and DevEx improvements are the most important right now. This should include lower slot times as well as continuing to improve the EVM, especially with EOF being removed from Fusaka.

h. future improvements: I think we need to include other stakeholders, especially smart contract developers, in our scoping process. Also, collecting early feedback should be prioritized more.

7. Eitan Seri-Levi

  • Client Implementer

  • Lighthouse

a. Pectra successes: I'm glad we agreed to move PeerDAS and EOF to a future fork. It helped narrow the fork scope and allowed us to ship the upgrade much faster. Without that decision, Pectra might not even be on testnets yet!

b. Pectra improvements: I think we were off on timelines and scoping. A good example of scoping issues was the Attestation EIP. Spec-wise, we were just moving the committee index out of AttestationData and into the parent Attestation type. It seemed simple on the surface, but it quickly became a very involved implementation. And after that work was completed, we introduced another change to attestations via SingleAttestation. Though SingleAttestation was, in a vacuum, a good change that helped protect against a DoS vector, it would have been nice to have included it as part of the original Attestation EIP work.

Timeline-wise, we missed the mark with the Pectra upgrade. Some of that had to do with last-minute changes, and some with the Holesky non-finality incident. We have talked many times about shipping smaller forks faster, and I hope we stick to that ethos, at least in the short term. I also think the community in general wants more frequent upgrades to the protocol.
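To make the Attestation change described above concrete, a minimal sketch (plain Python dataclasses, not the canonical SSZ containers) of the EIP-7549 restructuring: the committee index leaves the signed AttestationData, and the outer Attestation gains committee bits, so identical votes from different committees can share one aggregate:

```python
from dataclasses import dataclass

@dataclass
class AttestationData:
    slot: int
    index: int               # pre-Electra committee index; fixed to 0 under EIP-7549
    beacon_block_root: bytes
    # source/target checkpoints omitted for brevity

@dataclass
class Attestation:
    aggregation_bits: list   # participation bits, now spanning all committees in the slot
    data: AttestationData    # identical across committees voting the same way
    committee_bits: list     # new in Electra: which committees this aggregate covers
    signature: bytes = b""
```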

c. Pectra contributions: I think I was most proud of how our team, and other client teams, responded to the Holesky non-finality incident. Even though it was "just" a testnet, we used it as an opportunity to simulate a real war-room-like scenario. Teams put in a lot of extra hours to rescue the network, and many client implementations, including our own, added important features/optimizations that could help us better rescue the network during future non-finality incidents. Though it was disruptive to client teams and the community as a whole, I hope that experience can help client teams build better and more resilient software.

e. Pectra kudos: I'm very thankful for the Lighthouse team for providing me mentorship and the opportunity to grow. I'm lucky I get to work closely with such a talented team of engineers; it's a pleasure and a privilege to be able to call them my coworkers. I'm also really glad I got to meet engineers from other client teams in Kenya. Being able to work closely with them and get to know them on a personal level was a very rewarding experience.

f. Pectra 1-3 words: It's about time!

g. future priorities: PeerDAS in the short term. We've already put a lot of effort into this feature and it should be ready to be shipped soon. It's a very impactful upgrade and can help scale the network. Immediately after, or concurrently with PeerDAS we should also increase the blob limit to further scale the network. In the long term I believe endeavors like the Beam Chain can help push us to deliver bigger more substantial changes to the network. We just need to make sure it doesn't stop us from continually shipping things in the short to medium term.

h. future improvements: Smaller scoped forks at a more frequent cadence, more non-finality network testing/tooling, Continued yearly interop events

8. ethDreamer

  • client implementer

  • Lighthouse

a. Pectra successes: This fork was the first time in several years where it wasn't obvious what our main focus should be. I feel like this caught us all by surprise. So the mistake wasn't made necessarily during the fork; it was made in not trying to answer this question earlier. We just weren't ready when it came time to make a decision. So I think we did alright for what we came up with on short notice.

b. Pectra improvements: We should've started thinking about what the next upgrades should be much earlier.

c. Pectra contributions: Well... I was a major voice in getting Max EB into the fork (though most of the credit for the readiness of the proposal belongs with others). In consensus layer call 128, Max EB was "on ice" unless it got "rekindled very soon".

Upon hearing this, I was concerned, as I considered it to be the most important EIP considered for the fork. So I jumped into working on it and rallying behind the scenes, and then advocated for it in consensus layer call 129.

Then I hosted some breakout rooms to get some of the other teams' thoughts. And when consensus layer call 130 came up, Max EB beat out inclusion lists.

I then hosted a bunch of the early Max EB breakout calls and was also primarily responsible for the decision to make consolidations EL-initiated.

d. Pectra challenges: This fork was wild because we implemented the bulk of the code in 3 weeks in a race before the interop in Kenya. We used a new strategy that made merging the code significantly easier.

e. Pectra kudos: Pawan did a bunch of work keeping the fork code up to date and bug hunting as it evolved after the initial coding :)

f. Pectra 1-3 words: LFG

g. future priorities: short - PeerDAS & EIP-7917 -> L2 scaling and UX improvements

medium - scale the L1 baby

long - ZKEVM!!!!

h. future improvements: More collaboration between client implementers and researchers. Find our north star. We need to know roughly where we're headed a couple of forks in advance, like we used to.

9. Francis Li

  • PeerDAS contributor

  • Base

b. Pectra improvements: I would love to see the process improve; below are my observations and suggestions.

Timelines

  1. Overall, from Mar 2024 (Cancun) to May 2025 (Pectra), 14 months for a hard fork is way too long for iteration (and the trend is toward even longer). This has several major disadvantages: 1. losing out to competition; 2. bad for users/builders, as iteration is too slow; 3. non-stop scope discussion, which leads to wasted time and ever-changing scope.

Ideally, I'd love to see Ethereum have a fixed hard fork cadence of 6 months (2 hard forks a year), or even 3-4 hard forks a year. To achieve that, I believe Tim's Reconfiguring AllCoreDevs could be a good starting point, but I think the most important thing is to commit to a much shorter timeline and work backwards from that to what needs to be done to make it happen.

Scoping

I believe Tim's Reconfiguring AllCoreDevs could be a great starting point, and I'd like to propose 2 things:

  1. have a rough (ideally accurate) estimate of how much implementation effort from each client team an EIP would need if included. This way we could better estimate how long it takes to ship the hard fork.

  2. put implementation effort as one of the major considerations for including an EIP; it helps with prioritization

Coordinating on ACD

I believe this part needs to improve a lot, my observations are:

  1. from my observation, it took more than 2-4 ACD calls (2 months) for a proposal to get reviewed, which is far too long

  2. There's no clear expectation or deadline for reviewers to officially review and provide feedback, which leads to super, super long review cycles and unnecessary back-and-forth discussion of the same questions. One example is the cell proof computation EIP for PeerDAS: this small proposal took 6 weeks to be merged, and the relevant spec PRs took even longer.

Suggestions

  1. we need to set better expectations and deadlines and hold people accountable; ~2 months to make a simple change to the spec should be cut down to 1 week

  2. we need better coordination / forums for async discussions to 1) shorten the feedback loop and 2) avoid repetitive discussions

c. Pectra contributions: On more blobs for Pectra

This sparked interest, discussion, and research about raising blob counts across the whole community; numerous analyses followed, and we were able to land the 3->6 blob increase in Pectra.
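For a sense of scale, a quick back-of-the-envelope calculation (assuming the standard 128 KiB blob size and 12-second slots):

```python
# Each blob is 4096 field elements of 32 bytes = 128 KiB (131072 bytes).
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT

SECONDS_PER_SLOT = 12
for target in (3, 6):  # Dencun target vs. Pectra target
    per_slot = target * BLOB_SIZE
    print(f"target {target}: {per_slot // 1024} KiB per slot, "
          f"~{per_slot / SECONDS_PER_SLOT / 1024:.1f} KiB/s of DA throughput")
```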

e. Pectra kudos: Shoutout to nero_eth for his multiple blob-related analyses, which made a big difference in convincing the community to raise blobs. Shoutout to the ethpandaops team for their coordination around hard forks, and the Xatu dataset for confirming solo staker data.

g. future priorities: In no particular order

  1. scaling => much higher blob capacity

  2. UX => anything that makes developer & user’s life easier

h. future improvements: fork cadence, 2 or more forks per year

10. Fredrik Svantes

  • Security Researcher

  • Protocol Security (EF)

a. Pectra successes: The collaboration process between everyone is, as always, truly amazing, and the ability to quickly pause the process when things went wrong, rather than trying to push through the upgrade too quickly, was also great.

b. Pectra improvements: I feel there should have been a smaller and tighter scope for Pectra, and we should have had more plans in place for when things went wrong in the process (such as testnet incidents). The great part is that we have taken those lessons learned and are now much better prepared for future upgrades.

c. Pectra contributions: We discovered a lot of vulnerabilities ourselves and organized the bug bounty competition as well as the bug bounty program; these efforts ended up surfacing quite a few security vulnerabilities that could have had severe consequences on mainnet had we not spent time and resources on this during the pre-mainnet phases.

d. Pectra challenges: I felt this one was more complex in the sense that there were more moving parts and the area of changes was also quite wide.

e. Pectra kudos: The forks are very much a team effort across the ecosystem, and I very much believe everyone is equally important when it comes to getting this shipped even though some may be more visible than others in the process. I do however feel that Roman from the Reth team went above and beyond their usual role and stepped up during the Holesky incident to help get that sorted out (as did many others!).

f. Pectra 1-3 words: Big things ahead!

g. future priorities: User security, as I believe it's one of the largest problems this ecosystem is facing today in terms of going mainstream. Improved developer experience, as without developers, users won't be attracted. Scaling, so that we can provide better experiences for users. Built-in privacy, to ensure the privacy of users. Censorship resistance, to give everyone equal opportunities.

h. future improvements: Smaller scoped forks will likely increase cadence, and it will make it easier to have tighter and more thorough security and testing. I also have high hopes that the new protocol upgrade process will be utilized which I believe will improve the overall security of Ethereum.

11. g11tech

  • contributor to lodestar and ethjs

  • Zeam

a. Pectra successes: Pectra could be the most challenging fork in recent history with regard to scoping. The fact remains that there is so much to do and so much we want to see deployed, while tech debt slows us down. This inevitably showed up in the testnet incidents as well as the spec bug discoveries, reminding us to keep fork scope sharp and focused.

ACD, coordinators, client devs, and the testing and devops teams kept pulling through, however, which is a testament to the passion and drive of the entire Ethereum ecosystem to propel Ethereum forward.

b. Pectra improvements: I think everyone here learned to keep focus and scope sharp, with renewed importance placed on keeping the testing surface small and manageable, on prioritizing what is most meaningful and impactful for Ethereum's growth, and on staying agile in responding to macro needs and the environment.

c. Pectra contributions: Co-authored and championed Pectra EIP-2935, along with seeding/maintaining its system contract code; contributed to various spec decisions/discussions as well as implementation contributions to both Lodestar and Ethereumjs for Pectra. Lodestar demonstrated itself to be a very robust and dependable client throughout the Holesky incident and its recovery, where I was initially involved in getting the chain started from a non-finalized checkpoint. Ethereumjs has similarly stayed ahead in implementing the EL EIPs to provide an impetus to testing and development, especially with regard to EELS.

Overall, I feel that both client teams moved development, the spec, and testing forward, and I had the opportunity to contribute to both.
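For readers unfamiliar with EIP-2935, a rough Python model (not client or contract code) of the ring buffer it introduces: the last HISTORY_SERVE_WINDOW parent block hashes live in the storage of a system contract, so they can be served from state rather than from client-internal history:

```python
HISTORY_SERVE_WINDOW = 8191  # ring buffer length in storage slots

class HistoryContract:
    def __init__(self):
        self.storage: dict[int, bytes] = {}

    def set(self, block_number: int, parent_hash: bytes) -> None:
        # Called by the system at the start of block `block_number` with
        # the hash of block `block_number - 1`.
        self.storage[(block_number - 1) % HISTORY_SERVE_WINDOW] = parent_hash

    def get(self, queried: int, current: int) -> bytes:
        # Serves a hash only while it is still inside the window.
        assert current - HISTORY_SERVE_WINDOW <= queried < current
        return self.storage[queried % HISTORY_SERVE_WINDOW]
```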

d. Pectra challenges: We have had a mix of all of the challenging bits: tech complexity, coordination, and silly as well as unnoticed spec bugs. So the Pectra testnets have been particularly testing, and these challenges proved fatal for Holesky even though Sepolia survived.

However, I feel that this gave the entire dev community good hands-on training in handling issues on a live network, as well as a super important lesson in carving out a very focused but contained scope.

e. Pectra kudos: I would like to appreciate the ethpandaops team, especially Paritosh and Barnabas, who with their singular focus on devnet testing kept at it even when testnets like Holesky were falling apart. I also believe more of the Lodestar team, NC/Nico, rose up to the challenges in this fork while I focused on other things. On 2935, we had very good coordination between Guillaume, lightclient, and me to resolve issues and make 2935 a better EIP. Jochem in ethereumjs did a good job of moving EIP testing forward with the testing teams; Andrew contributed the system contract implementations in ethereumjs; and lightclient drove overall spec/EIP development, especially 7702. Also shoutout to Alex and Tim for doing an impossible job steering ACD.

f. Pectra 1-3 words: Fingers crossed? lol

g. future priorities: imo ZK proving is the new north star that can deliver scaling without compromising on Ethereum's values. Native rollups and beam chain are super exciting as new goals, and in the short term, delayed execution, a unified binary merkle tree, peerdas, and focil are immediate needs.

APS and 3SF are also key architectural changes that we need in the protocol.

h. future improvements: A 6-month fork cadence is important for more continuous rollout and development, with meaningful and sharp scope. More cross-pollination between different efforts seems to be required, and, most important of all, a long-term (though evolving) vision of how we want to see the protocol.

12. Ignacio Hagopian

  • Protocol engineering

  • Stateless team (EF)

a. Pectra successes: Pulling off one of the most EIP-dense forks shows how much coordination core developers were able to squeeze out, even under the current ACD setup.

b. Pectra improvements: The fork started with the intention of being a small fork and ended up being one of the biggest forks in Ethereum's history. This shows how scoping for ACD is still a challenge, and fortunately, it has already triggered deeper reflections on how we can make this better in the future.

c. Pectra contributions: I'm happy to have helped push a long-overdue protocol change (EIP-2935) that can make future protocol changes easier.

d. Pectra challenges: Although the Holesky problem was (kind of) a red herring, I think it was an eye-opener for how much protocol development must focus on resiliency and double down on general testing efforts -- not only on the testing team's shoulders, but on all core developers'.

e. Pectra kudos: I want to highlight the EF testing and security teams, not only in theory but also in managing many concrete consensus bugs. I would also add Marius from geth, who always keeps an eye on protocol complexity & security too.

f. Pectra 1-3 words: Happy, somewhat uneasy

g. future priorities: Near/Medium: L1 scaling

Long: L1 snarkification & simplification

h. future improvements: Include more continuous L1 roadmap planning in ACD, allowing people working on medium/long-term goals to be more efficient.

13. James He

  • client implementer

  • Prysm

a. Pectra successes: I think the best was EthPandaOps' testing and devnet coordination. I also thought the interop channel served the purpose of gathering everyone. The hosts did a good job taking up the mantle that Danny left behind.

b. Pectra improvements: Something that might be helpful (because at one point there were so many PR changes): reposting PRs in the interop channel for visibility when they are merged, so we know what needs to be done by the next devnet date; it was hard to keep track. Also, I think EIP selection should have had more review and evaluation of impact. In particular, EIP-7549 was especially hard to implement; while the EIP was simple, it created so many bugs.

c. Pectra contributions: This was my first time working on EIPs, contributing directly at all levels (EIP spec, EIP review, EIP tests, EIP implementation). I'm proud of the work I've done, but I have also grown an appreciation for the skilled researchers and engineers reviewing and finding bugs that I would not have found with my current level of knowledge. I am also proud of my team's leadership during the testnet event and the suggestions provided to attempt to recover the chain.

d. Pectra challenges: I think we had around 5-6 consensus bugs around execution requests. There are a lot of gaps in the spec tests, especially around state assumptions and how clients interpret when to error vs. when to no-op; there are a lot of new lessons learned that should be written down around this. There are also lessons learned around the gaps between EIP research and implementation, especially around EIP-7549: it was the smallest EIP but ended up being the biggest implementation headache. It also resulted in a lot of unforeseen issues, such as remembering to have fork guards on beacon APIs (now that attestation data has changed but doesn't have its own type); we should remember to specify that or at least audit for it. There may be continued issues around execution requests and attestation changes, just like how we are still dealing with certain blob edge cases.
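Since execution requests come up repeatedly in these responses, a heavily simplified sketch of the EIP-7685 commitment may help; this reflects one reading of the spec (a flat hash over per-request hashes, with empty requests skipped), not any client's code:

```python
import hashlib

# Request type prefixes in Pectra: 0x00 deposits (EIP-6110),
# 0x01 withdrawals (EIP-7002), 0x02 consolidations (EIP-7251).
def compute_requests_hash(requests: list[bytes]) -> bytes:
    # Each element is a request_type byte followed by request_data;
    # requests with empty data are excluded from the commitment.
    outer = hashlib.sha256()
    for r in requests:
        if len(r) > 1:
            outer.update(hashlib.sha256(r).digest())
    return outer.digest()

# Hypothetical payloads (real ones are encoded request data):
reqs = [b"\x00" + b"deposit...", b"\x01", b"\x02" + b"consolidation..."]
print(compute_requests_hash(reqs).hex())  # the empty middle request is skipped
```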

e. Pectra kudos: I want to shout out Radek from our team, who stepped up to take on EIP-7549, which ended up being by far the most invasive and difficult EIP for us. Terence and Nishant for catching several consensus bugs through the process.

f. Pectra 1-3 words: finally almost done

g. future priorities: I think we should still prioritize scaling, but we should also prioritize EIPs that provide more safety to the chain; with more and more L2s and chains utilizing Ethereum, we need to be vigilant on safety and be responsible while scaling.

h. future improvements: Setting clear expectations on priorities (stack-rank the EIPs), considering timing, and better flows to check if anything is missing by a certain date (checklists sent out or something). The interops also help a lot.

14. Justin Traglia

  • Security Researcher

  • Protocol Security (EF)

b. Pectra improvements: We were a bit too ambitious with this upgrade. Compared to previous upgrades, Pectra is very complex. I would argue that it's the largest Consensus Layer upgrade to date, even including Bellatrix ("The Merge"). For reference, in total, Bellatrix specifications are ~1200 lines and Electra specifications are ~2500 lines. I'm very happy with the EIPs that we've included, but we should have been more restrictive from the beginning. Feature creep is real.

e. Pectra kudos: I would like to recognize two individuals:

  1. Fredrik Svantes -- Fredrik does so much behind-the-scenes work to support the Ethereum ecosystem. He manages the bug bounty program, organizes security competitions (which found several high-severity bugs in Pectra code!), leads the security calls with clients, manages all of the security grants, interviews candidates, provides frequent status updates to leadership regarding our team's work, manages our team's website, creates programs like ETH Rangers, develops useful tools/procedures like How to Multisig and the Ethereum Protocol Upgrade Process, and more...

  2. Benedikt Wagner -- Benedikt is a rising star on the Cryptography Research team at the EF. He made many important contributions to the ecosystem during this upgrade cycle: several notable improvements to ckzg for PeerDAS; he helped write several excellent research papers (Foundations of Data Availability Sampling; FRIDA: Data Availability Sampling from FRI; A Documentation of Ethereum's PeerDAS); he is writing/presenting on novel ideas such as zkFOCIL; and he is now leading the Hash-Based Multi-Signature track of the Beam Chain!

g. future priorities: Short term: scaling. Medium term: scaling. Long term: scaling. But seriously, scaling blobs/execution is what we need to focus on to ensure Ethereum's dominance. Scaling blobs will allow L2s to keep fees low and expand their ecosystems without limits. Scaling execution (raise the gas limit!) will allow entities operating on L1 to continue doing so in a cost-effective manner. To encourage more people to be on Ethereum, it needs to be easy/cheap.

15. Kev

  • Developer

  • Applied Research Group

a. Pectra successes: Testnet incidents, in particular the cryptography bugs, were handled well.

b. Pectra improvements: Scoping

c. Pectra contributions: We reviewed all of the client implementations of 2537 and refactored them to be cleaner and/or faster. We also added Max EB.

d. Pectra challenges: Probably death by a thousand EIPs; the complexity of each small EIP quickly added up.

e. Pectra kudos: I think Remy Roy is doing cool work to ensure that Pectra works for validators. I don't know him, so I'm basing this off the eth R&D channel.

f. Pectra 1-3 words: Fusaka

g. future priorities: Short: FOCIL for censorship resistance.

Medium: Delayed execution to unlock zkVMs

Medium/Long: zkVMs to remove execution from the critical path and allow for lighter nodes

h. future improvements: Set clear goals and identify bottlenecks

16. lightclient

  • core dev

  • Geth

a. Pectra successes: this was one of the most horrific forks we've ever done

b. Pectra improvements: - We should have been reasonable about timelines; there was no world where mainnet went live in Feb, and I find it discrediting for people to try and claim so in good faith

  • scope made no sense; we filled the fork with a bunch of stuff we didn't really need because PeerDAS and EOF weren't ready

  • all of this culminated in testnet issues, because why wouldn't it? Client teams barely test their clients now, so we only have 3-4 people seriously testing the code; of course this was going to blow up eventually

  • coordinating on ACD was fine modulo the unlikely timelines that were pushed. Ultimately it was the client teams that made these bad decisions

c. Pectra contributions: At the end of the day, I am proud of EIP-7702. When users finally get their hands on it, it's going to feel obvious, like this is how interacting with the chain should feel. For a fairly simple change, it was hard to get to this place. I'm proud that we resolved nearly all outstanding concerns with the EIP. The transition from EIP-3074 to EIP-7702 was ACD at its best -- client teams, researchers, application devs, and community members joining forces from many different perspectives to iron out the future for accounts in Ethereum.
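For readers who have not dug into 7702 yet, a minimal sketch (simplified from the EIP; signature validation omitted) of the mechanism: an EOA signs an authorization tuple, and processing it writes a delegation designator into the account's code:

```python
from dataclasses import dataclass

@dataclass
class Authorization:
    chain_id: int   # 0 means valid on any chain
    address: bytes  # 20-byte contract the EOA delegates its code to
    nonce: int      # must match the authorizing account's nonce
    # y_parity, r, s signature fields omitted for brevity

DELEGATION_PREFIX = bytes.fromhex("ef0100")

def apply_authorization(auth: Authorization, code: dict, signer: bytes) -> None:
    # Per EIP-7702, the account's code becomes 0xef0100 ++ address: a
    # designator telling the EVM to run the delegate's code in the EOA's
    # context, so the EOA can act like a smart contract wallet.
    code[signer] = DELEGATION_PREFIX + auth.address
```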

d. Pectra challenges: we've never had so many testnet bugs, so it's kind of incomparable

e. Pectra kudos: Mario and co. with the testing team really stepped into their role firmly this fork.

f. Pectra 1-3 words: Ready for takeoff

g. future priorities: short term -- scaling gas limit, history expiry

medium term -- solving state growth, ILs

h. future improvements: Stop doing research. There are enough ideas. Build something - for the sake of our sanity.

17. MarekM

  • client implementer

  • Nethermind

a. Pectra successes: dividing the fork, testnet incidents

b. Pectra improvements: scoping, coordinating on ACD

d. Pectra challenges: social coordination

e. Pectra kudos: EthPandaOps team, EF Testing team

f. Pectra 1-3 words: relieved

g. future priorities: everything related to scaling

h. future improvements: scoping

18. Marius van der Wijden

  • Client Dev

  • Geth

a. Pectra successes: I think the Cantina bug bounty program worked pretty well. A few pretty good bugs were found and submitted via the program; unfortunately, there was also a lot of noise that needed to be weeded out. The coordination on the execution layer for the Holesky and Sepolia incidents went quite well.

b. Pectra improvements: Timelines were pretty bad: rushed in the beginning, drawn out in the end. Testing and fuzzing were not up to the standard that I expected. Dividing the fork was definitely the right decision, but it should've been done sooner. We had way too many incidents, both on the testnets (those I don't care too much about; they were oversights specific to testnets) and also on the static test case side.

d. Pectra challenges: I think we saw a lack of coordination among client teams on the consensus layer during the Holesky incident. Everyone was debugging their own software, and knowledge was not being shared with the other teams. In stark contrast, the issues on Holesky and Sepolia on the execution layer were handled pretty well: all responsible parties sat in a room and discussed until we had a solution, coordinated timelines on the fix, and shipped it.

e. Pectra kudos: I would like to shout out Roman from the Reth team who took on coordinating during the Holesky incident. I would like to shout out Mario, Dan and Spencer of the EF testing team who were very responsive during the issues found by the bug bounty. Andres, Antonio, Kev, Gotti, Fredrik and all others for triaging the bug bounty issues and Justin for doing all the CL testing!

f. Pectra 1-3 words: Glad it's over

g. future priorities: L2 scaling through blobs, L1 scaling, security, focil

h. future improvements: More focus on testing from all client teams, not relying too much on the EF funded teams for test cases. Limit the scope of the hardforks early on. Make sure the right people are online for the testnet forks so they can debug if shit hits the fan.

19. Mehdi Aouadi

  • Protocol Engineer

  • Teku

a. Pectra successes: - Great client team collaboration and coordination, as usual

  • A well-packed fork with many great features and protocol enhancements

  • Some incidents helped us improve the protocol forking process and the test coverage

b. Pectra improvements: - Better fork scoping: we ended up including too many features

  • The complexity of some EIPs was underestimated, especially EIP-7549, which looked simple but had many side effects that required a lot of effort

  • The Holesky incident showed that protocol test coverage should be improved (core devs are already working on it) and that we could have had better incident management (a new fork scheduling/supervision process has already been implemented)

c. Pectra contributions: Participated in the implementation of EIP-7549.

d. Pectra challenges: Unexpected complexity and side effects of EIP-7549

e. Pectra kudos: Shoutout to the ethpandaops team, which helped us with great tooling and support

f. Pectra 1-3 words: Grateful, excited and relieved

g. future priorities: - PeerDAS to scale DA

  • ePBS to solve the block-building capture issue

  • FOCIL to solve censorship

h. future improvements: Definitely better scoping, which leads to a better fork cadence.

20. Mikhail

  • Research Engineer

  • TXRX

a. Pectra successes: We were good at optimistically adding new features to Pectra. There were a lot of fruitful technical discussions, with a responsive and proactive client dev audience, on how to address particular problems that arose during the design and engineering phases.

b. Pectra improvements: We should have been more thoughtful about the impact of particular features introduced in Pectra. The impact should have been analyzed early in the process to remove unnecessary strain on engineering and testing. There were plenty of gaps in our test coverage because of the sheer size of the changes introduced into already-working parts of clients and new features like execution requests.

d. Pectra challenges: This was the most difficult hardfork since the Merge.

e. Pectra kudos: A huge shout out to the ethpandaops team, who worked 24/7 helping fix broken testnets and launching new devnets and testnets with tremendous speed!

f. Pectra 1-3 words: Grateful, no words

g. future priorities: We should be more cautious about spec testing and potentially introduce new testing techniques in an attempt to catch edge-case bugs before testnet forks.

h. future improvements: Future changes must come with a thorough impact analysis before getting considered for inclusion.

21. Nazar Hussain

  • Protocol Engineer

  • Lodestar

a. Pectra successes: Cross-team coordination during testing was amazingly helpful to speed up debugging on testnets. Staying purposeful through a few tighter bends along the process caused a lot of noise but was needed to steer all the teams.

b. Pectra improvements: We could better estimate the complexity of EIPs before considering them for inclusion, to avoid any last-minute change of plans.

d. Pectra challenges: We previously used disposable devnets, and things worked smoothly on testnets. Not this time: with the Holesky incident, all teams had to act fast to reach conclusions for recovery. That was difficult, but a good learning point for future forks.

e. Pectra kudos: @Nico from Lodestar for being deeply involved during the Holesky incident

f. Pectra 1-3 words: Nervous and relieved

g. future priorities: Scalability should remain the highest priority for the next couple of forks, until the L2 bubble bursts.

h. future improvements: More open discussion over EIP inclusion and its impact on development teams. More devnets focused on testing smaller features.

22. NC

  • Client implementer

  • Lodestar

a. Pectra successes: The rescue effort made by the community during the Holesky incident was very impressive.

b. Pectra improvements: Need to assess the complexity of an EIP better in the initial planning - looking at EIP-7549.

c. Pectra contributions: The Lodestar team has been pretty active in contributing to the discussion of devnets.

d. Pectra challenges: Technical complexities

f. Pectra 1-3 words: Good, peerdas next

g. future priorities: Scalability is one that is obvious. Personally I am interested in censorship resistance (focil). Would love to see EIPs that simplify the protocol instead of making it more complex.

h. future improvements: Need more people writing spec tests, and host the Cantina attackathon sooner.

23. nflaig

  • dev

  • Lodestar

a. Pectra successes: This was the first hard fork where I was not late to the party and was able to contribute from the beginning. In my opinion it was handled well; although the scope exploded a bit, overall it was not that bad of a process considering the high complexity of the changes in the fork.

For me the highlight was the Holesky incident, even though it was bad that it happened in the first place, the outcome in the end was huge for Ethereum overall as we learned so much and were able to improve our client(s) in many ways due to the ~3 weeks of non-finality. It was also great to see how the community rallied together and was able to rescue Holesky in the end, huge shoutout to everyone that participated.

b. Pectra improvements: Scoping was the main issue with Pectra. We added too many features with complex interactions, which many underestimated at the beginning, and EIPs that were assumed to be simple turned out to be massive changes (like EIP-7549). But that's a common software engineering problem; it's hard to fully grasp the scope of a feature until you have implemented it.

c. Pectra contributions: In my opinion we did a great job during the Holesky incident specifically as Lodestar was one of the only clients that consistently was able to follow the head of the chain and propose blocks. We published Lodestar Holesky Rescue Retrospective that summarizes our issues and learnings.

d. Pectra challenges: I was not actively involved in the Merge, Shapella was comparatively small, and Dencun changes were mostly implemented before I joined the team so it's hard for me to judge compared to other forks. The most challenging part of Pectra was definitely EIP-7549 which introduced many bugs in Lodestar but overall I think it was worth it to implement as it makes the network more efficient and secure.

e. Pectra kudos: Huge shoutout to the whole team, if I had to pick a single person for this fork it's definitely NC who was the main reason why Pectra went relatively smooth for Lodestar and we were able to ship features in time.

f. Pectra 1-3 words: LFG

g. future priorities: We should prioritize improving censorship resistance and reducing the negative effects of MEV on Ethereum. Although I understand why features like PeerDAS are being prioritized first, we kinda need to address scaling while keeping the network decentralized, which is a difficult problem to solve, but I am sure we can do it.

h. future improvements: Keep scope of forks smaller

24. Nishant

  • Client Implementer

  • Prysm

a. Pectra successes: Pectra is an example of a fork where the core development process failed, so nothing to add.

b. Pectra improvements: Pectra was initially envisioned as a 'short fork' that would be on mainnet by the end of 2024. However, it became our biggest fork yet, with a kitchen sink of EIPs. On the consensus side, a lot of these changes touched very sensitive places, which required much more rigorous testing. Some EIPs, such as 7549, ended up being much bigger in scope than intended, which had the knock-on effect of taking a long time to ship the fork.

d. Pectra challenges: Multiple testnet failures definitely broke the morale of many core devs. So finally shipping the fork is a relief.

e. Pectra kudos: To the whole Prysm Team, they did an amazing job getting this fork over the line.

f. Pectra 1-3 words: Relieved

g. future priorities: PeerDAS and ePBS

h. future improvements: Decide fork priorities in advance before including EIPs.

25. Pari

  • DevOps engineer

  • EthPandaOps

a. Pectra successes: Working on multiple forks and topics at the same time - this wasn't the case even a year or two ago, but now we're doing it routinely.

b. Pectra improvements: We need to get far better at deciding scope early and decisively; similarly, we need to be more pragmatic about which EIPs go in - especially about when we consider an EIP ready to go in.

c. Pectra contributions: This is the first time we had automated devnet tests for a fork; this helped catch a lot of bugs early on and allowed devs to perform scenario tests locally. The tool can be found here.

d. Pectra challenges: The unexpected technical challenges and timelines made Pectra especially difficult to test. Re-configuring tasks to work on more things at the same time also took away a decent bit of mental capacity that would otherwise have been spent on testing.

e. Pectra kudos: Lightclient did a great job corralling the EIP-7702 crew! The EF testing team has been amazing with their new EELS/EEST approaches. Jim from Attestant was extremely helpful during every testnet bug or incident.

f. Pectra 1-3 words: Onwards with PeerDAS!

g. future priorities: We need more L1 scaling and significantly better L2 interop and UX.

h. future improvements: Engage more in the ACD process; we're trying to make it clearer when and what decisions will be made, so it would be nice if the community starts getting more involved.

26. Paul Harris

  • Staff Senior Blockchain Protocol Engineer

  • Teku

a. Pectra successes: I think we changed course well, accepting that we'd over-committed.

b. Pectra improvements: I think we needed to be more realistic when accepting what to put in Pectra. PeerDAS plus MaxEB was always too much, and we knew that in May, but we decided that we may as well put PeerDAS in if the EL was putting EOF in; in hindsight, that was a bad basis for a decision.

c. Pectra contributions: I think the Teku team really stepped up in testing and in contributing to the overall development process in Pectra. I'm really proud of our contributions to MaxEB, to testing, and to the core-dev group.

d. Pectra challenges: Kurtosis has completely changed the game for testing. We can now test multi-client networks on our own machines, relatively easily and relatively often. This is a huge improvement over previous networks, where everything had to be manually crafted.

e. Pectra kudos: Ethpandaops are amazing. Their ability to help so many people with so many things - it really can't be overstated how transformative a team like that is to our community. Mikhail is super helpful closer to home. He's always open to discussions about how things work and how they could be improved, and goes above and beyond to try to ensure that we're producing high-quality changes.

f. Pectra 1-3 words: Relieved

g. future priorities: PeerDAS near-term. I'm not sure we fully appreciated a year ago the urgency that blob availability would have. Medium term, we need to start looking towards pre-confirmations, censorship resistance, and reducing block time. Longer term, looking at 3SF.

h. future improvements: Delivering more frequently to mainnet (every 6 months where possible) if we can manage that. I think moving towards async discussions for ACD decision making, or investigating a slight change to the ACD calls so that it's more realistic for APAC members, if possible. Development streams would give us more confidence that a feature is ready for a delivery pipeline, but they do mean that teams need to be able to facilitate parallel work streams, which may be more complicated for smaller teams.

27. Phil Ngo

  • Project Manager

  • Lodestar

a. Pectra successes: Despite the scoping changes and unpredictability, the core development teams remained flexible and adaptable to a consistently changing environment. There were some EIPs which were agreed upon early, which helped with spreading out the implementations. We remained realistic about the complexity of the entire fork and agreed to split it. We were able to deal with testnet incidents and bugs of all sorts pretty easily, with a good feedback loop ensuring spec tests covered anything we found. Coordination with EthPandaOps and the tooling they develop for downstream client teams (e.g. Kurtosis) was spectacular, and part of the reason why this velocity of development was achievable within a year.

b. Pectra improvements: We failed to scope the complexity of some of the EIPs properly, such as EIP-7549. It sounds simple in the specification, but it created huge headaches for the implementors themselves. A good way to help CFI or even SFI future EIPs would be to have teams do some rough scoping or even a draft implementation as part of their advocacy for a specific EIP. Sometimes even one client team's implementation could give an idea of the work scope for other teams. However, the optimization was enormous and likely makes a huge difference for our scaling efforts in Fusaka and beyond.

Inconsistent perspectives on the fork scope should be settled much earlier. This involves having to make tough decisions and hard calls that may be controversial, but building a better process should be prioritized. It becomes difficult to scope and organize priorities of work if the scope of fork n+1 is not finalized ahead of time.

Some of the complaints I hear about fork scoping can be attributed to not looking far enough ahead, and also to not understanding how specific EIPs are stepping stones to future implementation work as part of a larger multi-fork goal. EOF, for example, snowballed into an exceptionally large mega-EIP - something that should never have happened, and wouldn't have if we had made incremental changes throughout the years. If multiple things need to be done to achieve a goal, we should look at the end goal and how EIP 1 of n contributes to that effort, rather than saying it doesn't have enough value on its own to be included. This is how things snowball and become a much larger problem later on. I see this potentially being a problem with other large efforts such as the larger PureETH (Purified web3) initiative.

One of the things we've gotten better at, but could still improve, is our ability to discover and test edge cases in our specifications/implementations. Having more eyes/coverage on "attacking" or fuzzing the implementations would be a great thing to increase. We've seen positive results come out of NFT-type devnets where environments are more unstable. Holesky was the best stress test we've accidentally done so far.

c. Pectra contributions: We are proud to have made it through Pectra despite the turbulence to getting there.

d. Pectra challenges: I think this fork had quite a few complexities that were different from previous ones, which made it very hard to test properly or in full isolation to understand the risks involved. Some of the bugs that were found could be attributed to sheer luck, with certain circumstances aligning to make the problem visible. On one of the testnets, Lodestar was knocked out and three forks appeared; it became apparent that there was a specification bug with no handling for consolidating validators.

There's always some anxiety to finding bugs, but having found some critical ones on some clients very close to the mainnet hard fork made this fork quite frightening compared to previous ones. The attackathons have been useful for finding edge cases and also ensuring that we've thought things through properly.

As previously mentioned, the social coordination/consensus gathering was very difficult, but some great lessons and ideas for moving forward came out of it. Scope creep was definitely an issue which impacted our velocity and could also be attributed to the messiness of coordination. We've learnt that even "small changes" can create big problems, which is generally why it's been hard to continue accepting complex changes to the protocol in its current state. For consensus, we have the ability to better course-correct via something like Beam Chain rather than doing hacky/half-assed fixes.

e. Pectra kudos: EF DevOps and their tooling, data gathering, and testing coordination made a huge difference for us. Not only do they provide the testing for us to run things locally, but they've been entrusted to ensure over 10 client teams can work together on interoperable devnets. I still don't understand how they are capable of doing so much for everyone. Without them, it's easy to see how much slower we would be to ship anything.

f. Pectra 1-3 words: Please don't break

g. future priorities: I think the recent simplified objectives of the EF summarize it at a very high level, but more specifically:

  1. Scale L1: We need to figure out how to scale while reducing complexity and ensuring that we prioritize all the little things that get us to the end picture. It seems clear that Ethereum wants to move towards a ZK-friendly future, and everything we do should be a stepping stone to making this happen. Ensure that we are capable of raising the gas limits so capacity is there if needed, but the UX needs to be taken into consideration. It's not just about scaling technically, but also about scaling the tools/experience to allow Ethereum to be the platform of choice for world-changing applications.

  2. Scale blobs: While doing so, we need to make sure we TEST unhappy cases more often, including how we go about dealing with specific scenarios where we may lose finality or have multiple forks on a turbulent network. This will be even more essential as many nodes will no longer have all the information. We need to prioritize the failure modes of PeerDAS, not just ship it once we have something working most of the time.

  3. Improve UX: We need a unified vision for simplicity, interoperability, and functionality. The protocol is already too complex, and tooling for it has pitfalls that make building high-value applications risky. If we want builders to generate value on Ethereum, we need to make it easy for them to understand the protocol. Interoperability is another big one, as it's difficult when we're using different stacks that do the same thing. Parts of the protocol do not understand other parts of the protocol. Aligning on why we still have RLP rather than fully moving to SSZ structures is important. If we can't make ourselves more interoperable, simplicity will not happen. Functionality means making deliberate choices about what outcomes we want to see. How things function with each other across various parts of the ecosystem should be intentional and met with demand from builders. We generally do need input from others outside of Protocol to make this happen.

h. future improvements: I think there is demand for scaling L1, but people have slightly different visions of how to get there. The most important thing is trying to understand our end goals so we can work towards this vision together. EOF is more a symptom of the overarching problem, which is developer experience. Having a DX czar (see EIP-7940) and someone who can advocate on behalf of non-core devs about what core devs should be doing is helpful. Where we failed in understanding EOF is that it's a series of problems that have piled up over the years because we've consistently rejected small changes. Had we foreseen or known what the end goal was for the EVM, a series of smaller implementations spread over several hard forks could've been more appetizing for inclusion if we were working towards a clearer end goal.

We need to also be better at listening to others outside of the protocol community as they are technically our "users" or "customers." When non-core developers come to ACD or advocate for changes that would make their lives better, we should take it into greater consideration as dapp/tooling devs are what create exponential value for Ethereum. If we make the lives of our builders easier, they are a force multiplier to creating useful applications that utilize Ethereum in the background.

We also focus a lot on multi-layer hard forks (EL + CL) and not enough on what could be done in isolation. Perhaps if we get better at scoping useful changes that only require one layer to upgrade, we could technically reach our "2 forks a year" goal.

Let's lock Fusaka ASAP with only what is necessary to not delay PeerDAS. Smaller, more frequent forks and/or more layer-centric forks in parallel are how we can probabilistically increase our shipping velocity for protocol features.

28. Rafael Matias

  • Test/DevNet Testing and Tooling

  • EthPandaOps

a. Pectra successes: The original Pectra plan turned out to be a bit too ambitious. We packed in more features than we could realistically deliver and underestimated the complexity involved in shipping so many major changes at once. It took us some time to fully acknowledge that. There were no bad intentions—we simply aimed to deliver more. In the end, though, we had to scale back and cut a few features. Looking ahead, I’d love to see smaller features shipped more frequently. I believe we’ve learned a lot from this fork, and I’m confident that future ones will benefit from those lessons.

The Holesky incident was a memorable one—and led to many long nights for a lot of us. Although the bug (an incorrect deposit contract address) was something that could never have occurred on mainnet, it revealed several deeper issues in client implementations during extended periods of unfinality that needed to be addressed. In that sense, while the bug was frustrating, I'm actually glad it happened—it clearly showed that we need to improve client resiliency so that clients can better handle these kinds of edge cases. It was encouraging to see the client teams respond quickly and ship fixes. Recovering Holesky was a real achievement and required a great deal of coordination. Shoutout to Phillip, with whom I spent many nights in online calls trying to make things work again.

Then came Sepolia—another bug, again related to the deposit contract. This time, it involved a different type of contract gated by an ERC20 token. Fortunately, many of us were together at an offsite event and could address the issue collaboratively. Huge thanks to the Geth team for swiftly diagnosing the problem and pushing a fix.

b. Pectra improvements: Reaching "rough" consensus can be quite challenging. I think the Pectra fork taught us a lot, and moving forward, we’ll likely avoid making the same mistake of bundling too much into a single, oversized fork.

We’ve always seen testnets as just that—a place to test. We assumed that if they broke, it wouldn’t be a big deal as long as we could recover them. But this hard fork taught us that many others rely on these testnets for their workflows. So going forward, we should be much more careful not to break them—unless it’s something that’s planned and clearly communicated in advance.

c. Pectra contributions: There's a lot under the ethpandaops GitHub org that has been updated or specifically created for Pectra, including metrics/events collection.

d. Pectra challenges: Not sure if it was the hardest, but the most stressful situation was for sure the coordination effort to bring back the Holesky testnet. It was also the most rewarding. Unfortunately, the testnet became unusable for anyone who wants to test validators, due to the number of slashed validators that need to be exited. But for that, we now have Hoodi :)

e. Pectra kudos: Kudos to Mario, Dan and Spencer. The EF Testing team deserves applause for what they do. I also enjoyed helping them get a new UI for Hive :)

f. Pectra 1-3 words: Let's ship Fusaka!

g. future priorities: Right now, my focus is more short-term. Let’s prioritize PeerDAS for blob scaling, work on improving client resiliency, and also start exploring some of the ongoing L1 scaling efforts.

29. ralexstokes

  • Researcher

  • EF Research

a. Pectra successes: I think we did quite well managing the scope of Pectra given its size. This involved coordinating implementations across many clients, testing teams, and teams like EthPandaOps.

b. Pectra improvements: We certainly got a bit ahead of ourselves with the size of Pectra. It is one of the biggest hard forks we have had to date, and we should be mindful to not repeat this in the future.

d. Pectra challenges: We had bugs on both the Holesky and Sepolia testnets. Usually testnets are relatively uneventful, which speaks to the complexity of the Pectra hard fork.

e. Pectra kudos: Have to give a shoutout to the EthPandaOps team. Amazing work as always.

f. Pectra 1-3 words: relief. optimism. excitement.

g. future priorities: Scaling the execution layer, scaling the blobs, and scaling UX

h. future improvements: The main thing we should do is align on a narrow, concrete set of goals so that we know what to implement when. We have a very powerful resource in core development that can bring great positive change to the world via Ethereum. We have a responsibility to make good on this promise and lack of focus will keep us from getting there.

30. rodiazet

  • EL implementer

  • Ipsilon

a. Pectra successes: scoping, coordination on ACD

b. Pectra improvements: Spec quality, testing.

e. Pectra kudos: Tim Beiko and his coordination of resolving testnet issues

f. Pectra 1-3 words: A bit nervous

g. future priorities: EL scaling

h. future improvements: Tests should be implemented in EELS instead of in many different formats (e.g. the BLS tests). We also need benchmark tests to properly analyze things like the impact of gas repricing.

31. s1na

  • Client implementer

  • Geth

a. Pectra successes: The mainnet fork was a success! Looking back, I was honestly impressed by how the testnet incident responses played out. When the Sepolia issue happened, in less than an hour the culprit was identified, patched, and communicated to the relevant community members.

b. Pectra improvements: What comes up often is scoping. Small EIPs added up and crept into something big. But I believe scoping is also the trickiest thing, given the diverse viewpoints and voices. It will always be a sore point.

c. Pectra contributions: I enjoyed implementing a few of the SFI'ed EIPs and making sure they are compatible with other clients. I also believe my updates to EIP-2935 led to its simplification and a reduced surface area for potential bugs.

d. Pectra challenges: It helped that these were configuration errors, but the challenging aspect for me was the sinking feeling, and the doubt that crept in about the mainnet launch.

e. Pectra kudos: I would like to give a shout-out to my team which managed the fork process well despite losing 2 senior members midway through. It was challenging but we did it.

f. Pectra 1-3 words: Well done everyone!

g. future priorities: Hands down Censorship Resistance. Also I think state growth and its side-effects should be studied more so we can increase the gas limit with more eagerness.

h. future improvements: Getting everyone under the same roof for a week goes a long way. Kudos to the organizers of previous interops!

32. samcm

  • DevOps

  • EthPandaOps

b. Pectra improvements: There were certainly a lot of learnings throughout this one! Obviously the original scoping and planning was turbulent, but it seems we're taking the right steps to stop that from happening again by planning future forks earlier. The testnet incidents, and the responses to them, were definitely rough, but we now have a much better idea of how to handle things if something similar were to happen on mainnet.

c. Pectra contributions: Immensely proud of all the Pandas ❤️. This was the first fork where we pushed devs towards interoping with Kurtosis locally, and the tighter feedback loop definitely had a huge positive impact. Pari and Barnabas did a great job with the whip (as always!). Their role was supported from the shadows by Rafael and PK, who churned out tools and features at a ridiculous speed and at such high quality. It's hard to summarize Andrew & Matty's contributions, but I'll try: Knowledgeable, Yielding, and Supportive.

EIP-7691 was the first EIP from Andrew, Pari, and me to be included in a fork, and I'm really proud of the way we approached it. Once Dencun landed, we knew there would be demand for a blob throughput increase in the next fork, and we started capturing as much data as possible. A special shoutout to Mikel, Yiannis and Dennis from the ProbeLab team, who created Hermes, which gave us more visibility into the p2p network on the consensus layer around this time. We eventually ended up opening up the Xatu data pipeline so that home stakers could contribute their data, and this was a true silver bullet in justifying the EIP.

As the fork was solidifying, we presented our findings in this post which I'm especially proud of. A shoutout to Francis from the Base team who was also on this journey with us, and who also presented data and justifications for the EIP to be included!

e. Pectra kudos: Matty joined our team in November 2024, and his work with contributoor made a huge difference in us onboarding home stakers in a sustainable way. He truly went the extra mile to make the entire process as pain free as possible for those sending us data.

f. Pectra 1-3 words: Anxious excitement

g. future priorities: Scale L1 AND scale blobs

33. Saulius

  • Client implementer

  • Grandine

a. Pectra successes: We finally have clients released that support Pectra, so overall the process delivered. However, during Pectra we learned more about what to avoid next time than what to do next time.

b. Pectra improvements: Pectra became a release without a very clear flagship feature - instead, a lot of small changes that added up to a very complex release without that much value for the end user. We should avoid such hard forks in the future. Instead, we should shape hard forks around key feature(s) that have huge value for end users.

c. Pectra contributions: The main thing is probably the numerous optimizations that we implemented after the Holesky incident, which made Grandine way stronger against such incidents (hopefully they never happen again) in the future. We also optimised the rust-kzg library a lot.

e. Pectra kudos: All the folks that do the "invisible" work: all the authors of dependencies, and all the support folks such as testing, devops, etc.

f. Pectra 1-3 words: Amazing!

g. future priorities: PeerDAS and anything else that has huge end-user value and marketing value.

h. future improvements: Faster shipment of hard forks. This could be achieved by parallel development of hard forks (PeerDAS was developed in parallel with Pectra), and by full focus on a single EIP or a small number of EIPs per hard fork that bring value to users.

34. Simon Dudley

  • client implementer

  • Besu

a. Pectra successes: It was a good decision to split the fork. I was impressed with the speed at which the testnet incidents were diagnosed. As someone who can rarely attend meetings due to timezones, the move towards more async discussion and emphasis on ethmagicians is good.

b. Pectra improvements: We should have cut the scope sooner, but that's easy to say in hindsight. It felt like there were lots of last-minute changes to the specs, so I think prototyping EIPs and considering all aspects of them is really important. Testnet incident response felt a little uncoordinated at times, although we diagnosed and fixed the issues promptly. Happy to see that being addressed for mainnet with an incident response plan.

d. Pectra challenges: The test coverage gets better every fork. The devnet issues I looked at were generally trickier than in the last fork, which may be because the tests had already caught the more obvious issues. I am mindful that it could also be due to the complexity of the changes, but the devnet issues were quite specific to Besu. Overall, this feels like a positive outcome and an improvement on the last fork: many issues were caught by the tests, meaning we spent less time debugging obvious issues on the devnets.

The testnet incidents revolved around the deposit contract and how it differed between mainnet and the testnets. There were many compounding factors from my perspective, and various ways we might have avoided the issue, such as specifying the testnet differences in the relevant EIPs. This is a little unusual, since testnets are transient whereas an EIP is supposed to be a long-lived document, but I think there's room for pragmatism here. In Besu, most of this code had been around for a while, so perhaps there is some cost to implementing features early and not having an active champion for the feature.

The Pectra system contract architecture added a new architectural mechanism to the protocol in the form of system calls. The deposit contract is grouped with these at the code level, but it is an odd one out due to the way it is used and the fact that it existed before Pectra. The testnet issues were simple mistakes, but in the case of Holesky they had a very large impact. I am confident that we have learned some valuable lessons and ways to make the clients more robust, which after all is the point of testing.
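(As a rough illustration of the mechanism described above: a system call is an unsigned call the client itself makes from a reserved system address during block processing, not a user transaction, and its output feeds the EIP-7685 request objects. The deposit contract, by contrast, is read via receipt logs under EIP-6110, which is why it sits oddly alongside the system-call contracts. The sketch below uses a toy EVM stub; the predeploy address and gas limit follow one reading of EIP-7002 and should be verified against the EIP itself.)

```python
# Toy sketch of the Pectra system-call pattern (EIP-7002-style withdrawal
# requests). Address and gas limit are taken from a reading of EIP-7002 --
# verify against the EIP. A real client wires this into its actual EVM.

SYSTEM_ADDRESS = "0xfffffffffffffffffffffffffffffffffffffffe"
WITHDRAWAL_REQUEST_PREDEPLOY = "0x00000961ef480eb55e80d19ad83579a64c007002"
SYSTEM_CALL_GAS = 30_000_000


class ToyEVM:
    """Stand-in for the client's EVM; call() returns the contract output."""

    def call(self, sender: str, to: str, gas: int, data: bytes) -> bytes:
        return b""  # a real EVM would run the predeploy's bytecode here


def process_system_calls(evm: ToyEVM) -> list[bytes]:
    # After executing the block's transactions, the client calls each
    # system contract from the system address. Unlike a user transaction,
    # this call is unsigned and its gas does not count toward the block
    # gas limit. The returned bytes become the block's request objects.
    output = evm.call(
        sender=SYSTEM_ADDRESS,
        to=WITHDRAWAL_REQUEST_PREDEPLOY,
        gas=SYSTEM_CALL_GAS,
        data=b"",  # empty calldata dequeues pending withdrawal requests
    )
    return [output]


requests = process_system_calls(ToyEVM())
```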

e. Pectra kudos: Reviewing the Besu code for EIP-7685 (General purpose execution layer requests) was a delight to see how much code it removed. Props to the authors, lightclient and Felix Lange!

f. Pectra 1-3 words: Relieved

g. future priorities: Now that we have come to consensus on hardware requirements, we should scale the gas limit as much as possible while minimising the impact on decentralisation.

h. future improvements: I strongly favour two forks per year, but have a degree of skepticism about the potential pitfalls of only attempting upgrades that fit into that timeframe. Clearly we can work on long upgrades in parallel, potentially multiple forks in advance to address this. This is already how we work to an extent, but it is more of a challenge to coordinate and allocate resources for this when the timeframe is shorter. In order to SFI an EIP, even the "small" ones, I'd like to see that it has been prototyped and well thought out because Pectra had too many last minute spec changes in my opinion. I think we should strongly favour async mechanisms over synchronous calls for decision making.

35. somnergy

  • EL Client Developer

  • Erigon

a. Pectra successes: - Good decisions about which EIPs to include: they were scoped well, keeping in mind maximal benefit to the ecosystem

  • Devnets were routinely spec’ed well and spun up

  • Tests were timely, complete, and easy to run. Also, this time around EEST delivered tests for devnets well ahead of time, enabling rigorous test-driven development

  • More realistic tests with Kurtosis and Ethereum Package were spot on!

b. Pectra improvements: - Client implementation bugs caused the Holesky fork: clients should verify all aspects of config and their changes with regard to new EIPs

  • Client diversity was poor, with a majority of the Holesky network on a single client, leading to a bad situation becoming an absolute disaster

  • The EL-CL combined EIPs were initially poorly thought out, which led to several shuffles of the Execution APIs and a lot of wasted time

  • Many developers are oblivious to the new EIPs and new features on the network: this leads to situations where simple bugs may be missed by the 2-3 people working on them.

  • Even though we knew there is no such thing as a "small fork", many of us insisted on one during the initial phases.

  • Block-building tests were often missing

  • RPC testing was not complete, especially considering EIP-7702 cases

e. Pectra kudos: Shout out to EF Devops and EF Testing (STEEL) teams for their meticulous work driving this fork on their shoulders

f. Pectra 1-3 words: A risky ride

g. future priorities: - EVM enhancements

  • Post-Quantum safety

  • Privacy-focussed cryptography

  • Transaction Throughput

h. future improvements: - Fund to include: Don’t mess around with ambitious projects that are not prioritized for inclusion, like Verkle

  • Coordinate outside: Actively bring guests into the loop to share their ideas; a tight-knit group of 200 people shouldn't be the only source of new ideas

  • Scoping casual EIPs could be better: A ton of quality-of-life improvements get filtered out because they don't add that much value, and a ton of risky EIPs get dropped after being discussed for a long time. We should parallelize the devnet process into different levels of risk and have some internal scoring for how serious we are about including them (we already do this, but only to some extent). Afterwards, once the scope is finalized, involve every client team to implement the first devnet - not the other way round!

36. sproul

  • Client implementer

  • Lighthouse

a. Pectra successes: I think the consensus-specs changes were handled very well, as usual. Each testnet had clear requirements, and the test vectors gave us confidence in our implementation prior to spinning up nodes.

b. Pectra improvements: The Holesky debacle was not ideal, especially followed by the fallout of bugs found during the security competition. I think a slightly less aggressive upgrade schedule would have given us a little more space to breathe, or switching back to the old upgrade order of Sepolia first followed by Holesky. Waiting for the security reviews to complete prior to testnet upgrades would have also taken some pressure off the fix/release process.

I think scoping for Pectra was also a real issue, as MaxEB turned out to be a complex beast in its own right, and we had to ship it alongside a grab-bag of other changes (looking at you, SingleAttestation).

c. Pectra contributions: I'm proud of our adapted single-pass epoch processing algorithm. We wrote it prior to Electra under assumptions that are no longer true with the fork, and we had to delicately adapt it. We did make one error (that we know of!) in this adaptation process, which we are grateful was caught by an independent security researcher. However, on the whole, I'm very proud of this work, and think we did about as well as we could given the brief.
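(To illustrate the shape of the optimization being described, not Lighthouse's actual code: the consensus spec defines epoch processing as several separate passes over the validator set, and a single-pass implementation fuses them into one loop. Every new per-validator step a fork adds then has to be threaded through that same loop, which is where the delicacy comes from. A minimal Python sketch, with the reward logic and hysteresis heavily simplified:)

```python
from dataclasses import dataclass

EFFECTIVE_BALANCE_INCREMENT = 1_000_000_000   # 1 ETH, in Gwei
MAX_EFFECTIVE_BALANCE = 32_000_000_000        # pre-MaxEB cap, in Gwei


@dataclass
class Validator:
    balance: int            # actual balance, Gwei
    effective_balance: int  # spec-tracked effective balance, Gwei


def process_epoch_single_pass(validators: list[Validator], reward: int) -> None:
    # The spec describes reward processing and effective-balance updates as
    # separate passes over the validator set; fusing them into one loop is
    # faster, but every new per-validator step a fork adds (e.g. Electra's
    # consolidations) must be threaded through this same loop.
    for v in validators:
        v.balance += reward  # toy stand-in for attestation reward processing
        # Effective-balance update (hysteresis omitted for brevity).
        v.effective_balance = min(
            v.balance - v.balance % EFFECTIVE_BALANCE_INCREMENT,
            MAX_EFFECTIVE_BALANCE,
        )


vals = [Validator(balance=32_000_000_000, effective_balance=32_000_000_000)]
process_epoch_single_pass(vals, reward=14_000)
```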

d. Pectra challenges: For me this fork was the most difficult since the early beacon chain days (Medalla). The complexity of the spec changes on the CL side was the greatest we've had in a fork so far, and I think this really contributed to the difficulty. Having to then deal with non-finality issues on Holesky was also extremely frustrating, because in a way many of them were the same things we struggled with on Medalla 5 years ago. We knew these issues existed, and had been working on structural changes to address them, so it was unfortunate we had to implement hacky fixes on Holesky before the proper fixes were fully ready. On the plus side, this has drawn attention to this issue and we will hopefully have the proper fixes merged soon.

e. Pectra kudos: alexfilippov314, for responsibly disclosing a severe consensus bug in Lighthouse prior to it going live on mainnet. You're a legend and we can't thank you enough.

f. Pectra 1-3 words: relieved

g. future priorities: PeerDAS, because it's already close to completion and helps us scale blobs. However my main passion right now is improving the network's resilience during non-finality, and I think this is more pressing to get right than any upgrade. I'll be working on new improvements for Lighthouse's database and state handling during non-finality, and look forward to testing these on bumpy testnets.

h. future improvements: Minimise scope. Random "nice to have" changes should be treated with extreme suspicion, due to the complexity of maintaining old and new implementations long-term for backwards compatibility. If we can't delete the old implementation when making an improvement, we shouldn't do it unless the benefits far outweigh the added complexity.

37. taranpreet.eth

  • DevOps

  • Prysm

a. Pectra successes: I think there could be more user and team input. I feel that, for the most part, it has been EF research that has had the most say on the Ethereum roadmap.

c. Pectra contributions: Setting up testnet, managing Infra, and generally keeping everything smooth and running.

d. Pectra challenges: Lots more testnet failures. I have been working on Ethereum since before the Merge, and this fork has had the most problems in testing by far.

e. Pectra kudos: Nishant Das. It is his last fork, but he put in so much effort and worked 12 hours almost every day, it seemed.

f. Pectra 1-3 words: A chaotic challenge

g. future priorities: A more democratised process for how Ethereum prioritises its roadmap and features.

38. terence

  • Hard fork celebration optimist

  • Prysm

a. Pectra successes: Everything related to testing was one area that went really well in Pectra. Huge kudos to ethpandaops for being diligent, coordinating across all the client teams, launching devnets, and constantly following up on bugs. Shoutout to the consensus spec tests team for tirelessly working on test vectors and helping uncover bugs and tricky protocol edge cases. And major thanks to the execution spec tests team for making sure all the execution layer complexity was thoroughly covered.

b. Pectra improvements: Some decisions during Pectra took longer than needed. Relying solely on ACD calls slowed things down because of their weekly schedule. We should shift toward more frequent and open discussions using breakout rooms and async Discord chats to keep everyone in the loop. ACD calls should be used for final decision-making, not as the main place for discussion. Unifying the call structure for ACDE and ACDC would also help improve overall efficiency.

c. Pectra contributions: I'm proud of how we handled the attestation changes (shoutout to Radek’s work) — it wasn’t easy, but I think we managed it well. EIPs can seem simple on the surface, but their real complexity often shows up during implementation. This experience reinforced the importance of evaluating EIPs and client changes more carefully before deciding what to include.

d. Pectra challenges: Testnets were more eventful than usual — both Holesky and Sepolia ran into issues, and we ended up spending a few weeks recovering Holesky. We learned a lot from the process, but it was definitely more stressful than previous forks.

e. Pectra kudos: Big shoutout to everyone at ethpandaops! And an even bigger shoutout to everyone involved in testing — your hard work kept things moving and caught issues early.

f. Pectra 1-3 words: I'm tired boss

g. future priorities: Scale scale scale, whether that's execution scaling or data scaling. If we don't scale, we die. If we don't focus on users and product, we also die

h. future improvements: Aim for a hard fork every six months, alternating between major feature upgrades and smaller improvements focused on UX and quality of life. This cadence reduces pressure on each fork and allows more time to test complex changes. Kick off devnets for upcoming forks earlier. While one fork is in testing, begin spinning up a rolling devnet for the next to explore candidate EIPs and draft specifications. This helps avoid delays from late-stage scope debates. EIP champions should take greater ownership of their proposals—ensuring specs are well-tested, collaborating closely with testing teams, and clearly articulating how their EIPs contribute to the roadmap or enhance network security.

39. Tim Beiko

  • AllCoreDevs Chair

  • Protocol Support (EF)

a. Pectra successes: Two things come to mind. First, the iteration process that led to EIP-7702. This was a good example of different stakeholders in the Ethereum community working together towards a pragmatic compromise solution that will greatly benefit users. Second, pairing the blob count increase with EIP-7623. This was a clever way to enable Ethereum to safely scale by bounding the worst-case EL bandwidth usage to allow for higher average CL throughput.
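(For context on how EIP-7623 bounds the worst case: calldata is counted in "tokens", and a transaction pays at least a floor price per token, so calldata-heavy transactions that do little execution pay more while ordinary transactions are unaffected. A rough sketch of the rule as specified in EIP-7623, ignoring contract-creation costs:)

```python
# Rough sketch of EIP-7623's calldata floor pricing. Constants per the EIP;
# contract-creation and access-list costs are omitted for brevity.
STANDARD_TOKEN_COST = 4        # gas per token under the old-style pricing
TOTAL_COST_FLOOR_PER_TOKEN = 10  # minimum gas per token after EIP-7623
TX_BASE_COST = 21_000

def tx_gas_used(calldata: bytes, execution_gas: int) -> int:
    zero_bytes = calldata.count(0)
    nonzero_bytes = len(calldata) - zero_bytes
    # A zero byte is 1 token, a nonzero byte is 4 tokens (4 vs 16 gas).
    tokens = zero_bytes + 4 * nonzero_bytes
    return TX_BASE_COST + max(
        STANDARD_TOKEN_COST * tokens + execution_gas,
        TOTAL_COST_FLOOR_PER_TOKEN * tokens,
    )

# A pure data-carrying tx (no execution) now pays 10 gas per token instead
# of 4, shrinking the worst-case calldata a block can hold and freeing up
# headroom for more blobs.
print(tx_gas_used(b"\x00" * 100 + b"\xff" * 100, 0))  # 26000
```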

b. Pectra improvements: We should have split Pectra, or limited its scope, far sooner. We were too ambitious coming out of Nyota and didn't have sufficiently robust processes to scope the fork. Not allowing this to happen for Fusaka is my #1 priority!

c. Pectra contributions: The networking analyses done for the blob count increase by both PandaOps and ProbeLabs were excellent and are exactly the type of work that should be done to justify further similar changes.

d. Pectra challenges: The testnet issues highlighted that ACD has outgrown the stage where informal social coordination works. There are more teams, more people, and more users who are involved in the process and we need to step up to clarify who owns what and when, so that we avoid things slipping through the cracks and compounding issues.

e. Pectra kudos: Alex Stokes: this fork was a massive undertaking on the CL side, with PeerDAS eventually splitting off into another major thread, all of it during a fairly turbulent time at the EF. Thank you for helping to steer the ship, both from the captain's seat and the engine room 🚢

f. Pectra 1-3 words: Hold steady, focus.

g. future priorities: I'm less concerned about "which" than "why": we need to have a strong rationale for everything we do on Ethereum. A clear articulation of the value for a feature is paramount, complete technical specifications should be necessary, not sufficient.

h. future improvements: Everything I've listed here 😄

40. Toni Wahrstätter

  • Researcher

  • Applied Research Group

a. Pectra successes: In Pectra, everyone gets something: users get 7702, rollups get more blobs, nodes and validators get smaller block sizes and MaxEB, and devs get the BLS precompile. Despite the multitude of potential headliners pre-interop (MaxEB, Verkle, PeerDAS, EOF), we quickly narrowed it down to one of them and postponed the others for later forks. Pectra still got big, but at least we were able to make the cut and prevent further delays.

c. Pectra contributions: Analysis and experiments leading to the "Sepolia Incident", which was fixed before anything happened.

e. Pectra kudos: The EthPandaOps team crushed it this past year. Pectra was for sure not an easy fork for them, but they delivered.

f. Pectra 1-3 words: feelin' good

g. future priorities: L1 Privacy + Scaling

h. future improvements: Be more open and welcoming to feedback and perspectives from the community, including developers and users.

41. wemeetagain

  • client implementer

  • Lodestar

b. Pectra improvements: Early on in the development cycle, we got caught off-guard by the attestation refactoring. What initially seemed like a small change quickly became a much bigger refactor that affected large parts of our codebase.

Generally, we need to get more aligned about medium and long term vision, and what that implies for features, specifically around node/staker requirements, mev, block building, and censorship resistance.

d. Pectra challenges: This felt like the easiest fork yet(?) in terms of our client development work. From a security perspective, though, this one was quite nerve-wracking, given the additional complexity added to the state transition and the spec bugs found late in the testing cycle.

f. Pectra 1-3 words: time to scale

g. future priorities: Scaling, web3 purification, zk compatibility. Speaking to web3 purification (the other items are already oversold): additional focus on the consumability of verifiable chain data will become more important as Ethereum is integrated into other systems, in order to avoid security theater.

h. future improvements: It would be nice to have a better visualization of EIP progress tracking -- may be a nice hackathon project.



___________________________________________________________________

Thanks for reading to the end! Let’s continue building towards better worlds on Ethereum.
