Trusted auctioneer #30
Conversation
}

func (ms *MockServerSideStreaming[K]) SendMsg(m any) error {
	//TODO implement me
does this need to be done?
no no, these methods are not being used in the streaming methods so we don't have to implement them! I'll write better comments on these!
addressed in defc35b
totalCost := big.NewInt(0)
effectiveTip := cmath.BigMin(pendingTx.GasTipCap(), new(big.Int).Sub(pendingTx.GasFeeCap(), optimisticBlock.BaseFee))
totalCost = totalCost.Mul(effectiveTip, big.NewInt(int64(pendingTx.Gas())))
is the tip in gas? if so, should this be added, not multiplied?
tip = tip_fee * tx_gas, so I believe it should be multiplied. This tip is in wei, which is the absolute amount being paid to get the tx included.
grpc/optimistic/server.go
Outdated
return nil
}
if err != nil {
	return err
wrap in status.Error?
good catch, will do
addressed in defc35b
	Block:                  optimisticBlock,
	BaseSequencerBlockHash: baseBlock.SequencerBlockHash,
})
case <-time.After(500 * time.Millisecond):
where does this deadline come from?
this is somewhat arbitrary; 0.5s is long enough that mempool clearance should always have taken place within it. We can potentially reduce it too.
grpc/optimistic/server.go
Outdated
return nil, status.Error(codes.Internal, shared.WrapError(err, "failed to convert executable data to block").Error())
}

// this will insert the optimistic block into the chain and persist it's state without
// this will insert the optimistic block into the chain and persist it's state without
// this will insert the optimistic block into the chain and persist its state without
addressed in defc35b
// ValidateBech32mAddress verifies that a string is a valid bech32m address. It
// will return nil if the address is valid, otherwise it will return an error.
func ValidateBech32mAddress(address string, intendedPrefix string) error {
do these need to be exported?
yes, because the execution server is also using it.
flame/grpc/execution/server.go
Line 226 in 211b66d
if err := shared.ValidateBech32mAddress(address, addressPrefix); err != nil {
Should we just validate all of the addresses on parse of the config, or in sharedcontainer?
Admittedly this logic will need updating with newest geth changes, so probably best not to over optimize right now.
	"github.com/btcsuite/btcd/btcutil/bech32"
)

type Address struct {
add a comment saying these represent astria (?) addresses
addressed in defc35b
grpc/shared/container.go
Outdated
if err := ValidateBech32mAddress(address, bc.Config().AstriaSequencerAddressPrefix); err != nil {
	return nil, WrapError(err, fmt.Sprintf("auctioneer address %s at height %d is invalid", address, height))
}
auctioneerAddress = address
break here? i'm assuming there should only be one address that fits this criteria?
no, there shouldn't be a break here. We are searching in a map where the key is the block number and the value is the address. We want the address valid at the highest block less than the current block. Take the following case:
auctioneerAddressMap := {
	10: "astria123",
	20: "astria456",
	30: "astria789"
}
If the current block is 25, we want address "astria456". We start with maxHeightCollectorMatch as 0.
The loop first stops at 10, which is valid as it is less than 25 and also greater than 0. It is not the correct address since we want "astria456", but we set auctioneerAddress to "astria123", set maxHeightCollectorMatch to 10, and move on.
The loop then stops at 20, which matches since it is less than 25 and also greater than 10. We then set auctioneerAddress to "astria456".
The loop then stops at 30, which doesn't match since it is not less than 25.
We break out and are left with "astria456".
Essentially we are checking which range the current block number falls into.
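The walkthrough above can be condensed into a standalone sketch (map contents hypothetical, and the exact comparison bounds are an assumption of this sketch). Go's map iteration order is also undefined, which is a second reason the loop cannot break early:

```go
package main

import "fmt"

// auctioneerAddressForHeight returns the address whose activation height is the
// highest one not exceeding currentHeight, i.e. the range the height falls into.
func auctioneerAddressForHeight(addresses map[uint64]string, currentHeight uint64) string {
	var (
		maxHeightMatch uint64
		match          string
	)
	for height, addr := range addresses {
		// Map iteration order is random, so every entry must be visited.
		if height <= currentHeight && height >= maxHeightMatch {
			maxHeightMatch = height
			match = addr
		}
	}
	return match
}

func main() {
	addresses := map[uint64]string{10: "astria123", 20: "astria456", 30: "astria789"}
	fmt.Println(auctioneerAddressForHeight(addresses, 25)) // astria456
}
```

A height below every activation height returns the empty string, so a caller would still need to handle the "no auctioneer configured yet" case.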
grpc/shared/rollupdata_handling.go
Outdated
signature := allocation.GetSignature()
if !ed25519.Verify(publicKey, message, signature) {
	allocationsWithInvalidSignature.Inc(1)
	return nil, fmt.Errorf("signature in allocation does not match the public key")
return nil, fmt.Errorf("signature in allocation does not match the public key")
return nil, fmt.Errorf("signature in allocation is invalid")
could be due to wrong pubkey, wrong message, or wrong signature
good catch!
addressed in defc35b
grpc/shared/rollupdata_handling.go
Outdated
// `RollupData` we log the error and continue processing the rest of the transactions. We do not want to break control flow
// for an invalid transaction as we do not want to interrupt block production.
// TODO - This function has too many arguments. We should consider refactoring it.
func UnbundleRollupDataTransactions(txs []*sequencerblockv1.RollupData, height uint64, bridgeAddresses map[string]*params.AstriaBridgeAddressConfig,
don't need to export
need to export because the execution package and the optimistic package use it
grpc/shared/rollupdata_handling.go
Outdated
return ethTx, nil
}

func unmarshallAllocationTxs(allocation *auctionv1alpha1.Allocation, prevBlockHash []byte, auctioneerBech32Address string, addressPrefix string) (types.Transactions, error) {
func unmarshallAllocationTxs(allocation *auctionv1alpha1.Allocation, prevBlockHash []byte, auctioneerBech32Address string, addressPrefix string) (types.Transactions, error) {
func unmarshalAllocationTxs(allocation *auctionv1alpha1.Allocation, prevBlockHash []byte, auctioneerBech32Address string, addressPrefix string) (types.Transactions, error) {
addressed in defc35b
grpc/shared/rollupdata_handling.go
Outdated
return types.NewTx(&txdata), nil
}

func validateAndUnmarshallSequenceAction(tx *sequencerblockv1.RollupData) (*types.Transaction, error) {
func validateAndUnmarshallSequenceAction(tx *sequencerblockv1.RollupData) (*types.Transaction, error) {
func validateAndUnmarshalSequenceAction(tx *sequencerblockv1.RollupData) (*types.Transaction, error) {
addressed in defc35b
grpc/shared/rollupdata_handling.go
Outdated
if allocation != nil {
	log.Debug("ignoring allocation tx as it is a duplicate", "height", height)
rewrite this so that you don't try to unmarshal into tempAllocation if allocation != nil already, it's an unnecessary unmarshal
yeah, it gets a bit funky with multiple if-elses without this, but I think it's better to have that than redundant unmarshalling.
addressed in defc35b
grpc/shared/rollupdata_handling.go
Outdated
processedTxs := types.Transactions{}
allocationTxs := types.Transactions{}

var allocation *auctionv1alpha1.Allocation
don't think we actually need to save the allocation, this can be an allocationFound bool
addressed in defc35b
grpc/shared/rollupdata_handling.go
Outdated
if height < auctioneerStartHeight {
	log.Debug("ignoring allocation tx as it is before the auctioneer start height", "height", height, "auctioneerStartHeight", auctioneerStartHeight)
	continue
}
same here - don't try to unmarshal if height < auctioneerStartHeight
addressed in defc35b
cmd/utils/flags.go
Outdated
@@ -18,6 +18,7 @@
package utils

import (
	auctionGrpc "buf.build/gen/go/astria/execution-apis/grpc/go/astria/auction/v1alpha1/auctionv1alpha1grpc"
nit: put this import with the other non-native imports
addressed here: c969f99
Approving, don't see any major issues, although it's a large PR and there is certainly lots of cleanup to be done.
The one thing I want to see fixed before merge is the errors wrapped on return from the execution API, which should be gRPC status errors.
auctionServiceV1Alpha1 := optimistic.NewAuctionServiceV1Alpha1(sharedService)

utils.RegisterGRPCServices(stack, serviceV1a2, auctionServiceV1Alpha1, auctionServiceV1Alpha1, &cfg.Node)
Why does this take the auctionServiceV1Alpha1 twice? Should we make this one parameter into RegisterGRPCServices OR split these up into independent services?
yeah, so it takes in the optimistic execution service and also the auction service, both of which are implemented by AuctionServiceV1Alpha1. I don't think we should make them one parameter, but we should in the future split them into independent services. I didn't split them now because their functionality didn't seem big enough to warrant separate services, but it's a good idea to keep this in mind going forward.
optimisticExecServ: &optimisticExecServ,
auctionServiceServ: &auctionServiceServ,
It seems as though these are always the same serv object, perhaps only 1 param in and reuse? Or should we split the services?
let's split the services in the future. Having one object can be confusing, and if we do that, it'll require effort to split the services later again.
node/defaults.go
Outdated
nit: these changes aren't really necessary and will complicate any updates
@@ -273,7 +275,7 @@ func (c *Config) HTTPEndpoint() string {
	return net.JoinHostPort(c.HTTPHost, fmt.Sprintf("%d", c.HTTPPort))
}

// GRPCEndpoint resolves a gRPC endpoint based on the configured host interface
nit: this change isn't really necessary and will complicate any updates from upstream
Addressed in 23a2bd7
node/grpcstack.go
Outdated
@@ -49,11 +61,13 @@ func (handler *GRPCServerHandler) Start() error {
}

// Start the gRPC server
lis, err := net.Listen("tcp", handler.endpoint)
tcpLis, err := net.Listen("tcp", handler.endpoint)
Again nit on unnecessary naming and line changes here down to end of file.
will undo the naming change. I had done this when we were considering UDS. Good idea not to make unnecessary naming changes.
Addressed in 23a2bd7
// Iterate over all accounts and demote any non-executable transactions
addrsForWhichTxsRemoved := map[common.Address]bool{}

for addr, list := range pool.pending {
It feels like this should be easier, along with the clearList functionality. clear is basically just saying "start from total scratch". We could just actually reset?
pool.pending = make(map[common.Address]*list)
pool.beats = make(map[common.Address]*list)
pool.queue = make(map[common.Address]*list)
Main issue here would be the reserver, which we could solve by modifying the reserver function:
Line 118 in c969f99
return func(addr common.Address, reserve bool) error {
We could make it such that if the address is the default address it just clears out the reserved entirely, then reserve the default address?
yes, this would definitely be cleaner and probably faster. I am not really comfortable making this optimisation right now. I took a lot of code for this method from demoteUnexecutables to be doing things as close as possible to how geth does it.
grpc/execution/server.go
Outdated
@@ -183,7 +101,7 @@ func (s *ExecutionServiceServerV1) GetBlock(ctx context.Context, req *astriaPb.G
res, err := s.getBlockFromIdentifier(req.GetIdentifier())
if err != nil {
	log.Error("failed finding block", err)
	return nil, err
	return nil, shared.WrapError(err, "failed finding block")
In general I don't think we want to wrap these errors, they are specifically gRPC error codes and are a part of the API vs something internal for tracing.
Addressed in 23a2bd7
grpc/execution/server.go
Outdated
@@ -208,7 +126,7 @@ func (s *ExecutionServiceServerV1) BatchGetBlocks(ctx context.Context, req *astr
block, err := s.getBlockFromIdentifier(id)
if err != nil {
	log.Error("failed finding block with id", id, "error", err)
	return nil, err
	return nil, shared.WrapError(err, fmt.Sprintf("failed finding block with id %s", id.String()))
wrapped error as return type
Addressed in 23a2bd7
grpc/shared/container.go
Outdated
for _, tx := range txs {
	if deposit := tx.GetDeposit(); deposit != nil {
		depositTx, err := validateAndUnmarshalDepositTx(deposit, height, s.BridgeAddresses(), s.BridgeAllowedAssets())
		if err != nil {
			log.Error("failed to validate and unmarshal deposit tx", "error", err)
			continue
		}

		processedTxs = append(processedTxs, depositTx)
	} else {
		sequenceData := tx.GetSequencedData()

		if !foundAllocation && height >= s.AuctioneerStartHeight() {
			// check if sequence data is of type Allocation.
			// we should expect only one valid allocation per block. duplicate allocations should be ignored.
			allocation := &auctionv1alpha1.Allocation{}
			err := proto.Unmarshal(sequenceData, allocation)
			if err == nil {
				unmarshalledAllocationTxs, err := unmarshalAllocationTxs(allocation, prevBlockHash, s.AuctioneerAddress(), s.Bc().Config().AstriaSequencerAddressPrefix)
				if err != nil {
					log.Error("failed to unmarshall allocation transactions", "error", err)
					continue
				}

				// we found the valid allocation, we should ignore any other allocations in this block
				allocationTxs = unmarshalledAllocationTxs
				foundAllocation = true
			} else {
				ethtx, err := validateAndUnmarshalSequenceAction(tx)
				if err != nil {
					log.Error("failed to unmarshall sequence action", "error", err)
					continue
				}
				processedTxs = append(processedTxs, ethtx)
			}
		} else {
			ethtx, err := validateAndUnmarshalSequenceAction(tx)
			if err != nil {
				log.Error("failed to unmarshall sequence action", "error", err)
				continue
			}
			processedTxs = append(processedTxs, ethtx)
		}
	}
}
allocation := &auctionv1alpha1.Allocation{}
for _, tx := range txs {
	switch {
	case tx.GetDeposit() != nil:
		deposit := tx.GetDeposit()
		depositTx, err := validateAndUnmarshalDepositTx(deposit, height, s.BridgeAddresses(), s.BridgeAllowedAssets())
		if err != nil {
			log.Error("failed to validate and unmarshal deposit tx", "error", err)
			continue
		}
		processedTxs = append(processedTxs, depositTx)
	case !foundAllocation && height >= s.AuctioneerStartHeight() && proto.Unmarshal(tx.GetSequencedData(), allocation) == nil:
		unmarshalledAllocationTxs, err := unmarshalAllocationTxs(allocation, prevBlockHash, s.AuctioneerAddress(), s.Bc().Config().AstriaSequencerAddressPrefix)
		if err != nil {
			log.Error("failed to unmarshall allocation transactions", "error", err)
			continue
		}
		// we found the valid allocation, we should ignore any other allocations in this block
		allocationTxs = unmarshalledAllocationTxs
		foundAllocation = true
	default:
		ethtx, err := validateAndUnmarshalSequenceAction(tx)
		if err != nil {
			log.Error("failed to unmarshall sequence action", "error", err)
			continue
		}
		processedTxs = append(processedTxs, ethtx)
	}
}
Addressed in 23a2bd7
864e908 to 23a2bd7
## Summary

Implements a new chart dedicated for the flame package and also integrates the new chart with the astria stack. The `evm-stack` now supports running `astria-geth` or the flame fork with an auctioneer out of the box.

## Background

To support auctioneer deployment we need to run the geth [flame fork](https://github.com/astriaorg/flame), which implements the necessary changes in [pr-30](astriaorg/flame#30).

## Changes

- adds a new flame-rollup chart
- adds the new chart and auctioneer as dependencies of the `evm-stack`
- new commands and dev value file for local deployment; replace `just deploy-dev-rollup` with `just deploy-flame-dev-rollup` in the deployment process to run `flame` instead of `astria-geth`

Notes:
- handle image tags on final merge.
- does not implement any new smoke tests or workflows.

## Testing

Running a local cluster and monitoring geth logs.
`│ INFO [02-04|11:25:13.608] ExecuteOptimisticBlock completed block_num=52 timestamp=seconds:1738668312`

## Changelogs

No updates required.
## Summary

This patch adds the Astria Auctioneer service to the monorepo.

## Background

The Astria Auctioneer auctions the top of the next Rollup block to the highest bidder. It receives a proposed Sequencer block[^1] via a Sequencer node's `astria.sequencerblock.optimistic.v1alpha1.OptimisticBlock` service, and forwards it to its Rollup node's `astria.auction.v1alpha1.OptimisticExecutionService` for optimistic execution by the Rollup. The executed optimistic block hash returned by the rollup triggers Auctioneer to start an auction. It then receives bids from the Rollup node's `astria.auction.v1alpha.AuctionService`, selecting the winner using a first-price mechanism. The auction winner is finally submitted to the same Sequencer node using a standard ABCI `broadcast_tx_sync` to the Sequencer network. Auctioneer does not act as a server in its own right but connects as a client to a Rollup node and to a Sequencer node.

[^1]: A proposed Sequencer block is the data structure that comes out of the Astria Sequencer network's CometBFT process-proposal step.

## Changes

- Add crate `crates/astria-auctioneer`. It is implemented as a simple event loop that multiplexes proposed blocks and finalized commits (from sequencer), as well as executed rollup blocks and bids (from the rollup).
- Bump workspace dependency `tokio_utils` to `0.7.13` to get access to `CancellationToken::run_until_cancelled`
- Rename execution APIs `astria.bundle` to `astria.auction`
- Add domain type `SequencerBlockCommit` to new `astria_core::sequencerblock::optimistic::v1alpha1` submodule, following protobuf spec naming conventions (the other domain types remain exclusive to auctioneer for now).
- Implement `astria_core::Protobuf` for `FilteredSequencerBlock` to get access to the wiretype name in error and log messages.
- Refactor `astria_sequencer::sequencer::start_grpc_server` to `astria-sequencer::grpc::serve`
- Add module `astria_sequencer::grpc::optimistic` implementing the `astria.sequencerblock.optimistic.v1alpha1.OptimisticBlock` gRPC service
- Refactor the `astria_sequencer::app` module to implement an `EventBus` that is used by the gRPC service to subscribe to new events (right now, only proposed blocks and commits).
- Add setting `ASTRIA_SEQUENCER_NO_OPTIMISTIC_BLOCKS` to toggle the optimistic block service in Sequencer
- Add chart `charts/auctioneer`
- Update `charts/evm-rollup/files/genesis/geth-genesis.json` to set `astriaAuctioneerAddresses`
- Update `charts/sequencer/templates/configmaps.yaml` to set `ASTRIA_SEQUENCER_NO_OPTIMISTIC_BLOCKS`
- Update `justfile` to understand how to docker-build auctioneer
- Add job `auctioneer` to the docker-build github workflow.

## Testing

This patch does not contain blackbox tests because there currently is no support in the Rust ecosystem to easily mock streams. Smoke tests to submit a winning bid to sequencer and have it executed against geth will be done in a follow-up PR.

We deployed a local setup consisting of a dedicated auctioneer flame node, the auctioneer node, and a sequencer with optimistic blocks enabled. The code for the auctioneer flame node can be found at astriaorg/flame#30. The auctioneer has been tested locally against this auctioneer flame node branch. We have tested the setup by sending txs to the auctioneer flame node using spamooor. We check with logs and manually query the blocks to ensure that the txs sent end up at the top of the auctioneer's blocks.

## Metrics

- `astria_auctioneer_block_commitments_received`: counter (the number of block commitments auctioneer received from sequencer over its runtime)
- `astria_auctioneer_executed_blocks_received`: counter (the number of executed blocks auctioneer received from its connected rollup over its runtime)
- `astria_auctioneer_proposed_blocks_received`: counter (the number of proposed blocks auctioneer received from sequencer over its runtime)
- `astria_auctioneer_auctions_cancelled`: counter (the auctions auctioneer cancelled over its runtime because a new proposed sequencer block cancelled a previous one and thus the auction; this might include auctions for which submissions took too long)
- `astria_auctioneer_auctions_submitted`: counter (the auctions auctioneer successfully submitted over its runtime)
- `astria_auctioneer_auction_bids_received`: counter (total bids received over the runtime of auctioneer)
- `astria_auctioneer_auction_bids`: histogram (bids per auction, labels "processed" and "dropped")
- `astria_auctioneer_auction_bids_without_matching_auction`: counter (the number of bids auctioneer received without having a matching auction running over its runtime; for example because the bid contained different sequencer or rollup block hashes than what the auction expected)
- `astria_auctioneer_winner_submission_latency`: histogram (labels "success" and "error"; time from when an auction started to when auctioneer received a bid for the auction)
- `astria_winning_bid`: histogram (the amount that the winning bid was willing to pay)

## Changelogs

Changelogs updated.

## Related Issues

closes #1888
closes #1533
closes #1820

---------
Co-authored-by: itamar <itamar.reif@gmail.com>
Co-authored-by: Bharath <vedabharath12345@gmail.com>
Co-authored-by: quasystaty <ido@astria.org>
This document contains the high-level design of the flame side of the trusted auctioneer: https://www.notion.so/astria-org/Trusted-auctioneer-Geth-design-1066bd31a90c80e2be60ddd01335dcda?pvs=4
To understand the code changes made in the trusted auctioneer, please follow these PRs in order:
#3
#4
#7
#8
#9
#11
#12
#16
#17
#21
#22
#24
#25
#28
#34
#36