
#0: Add missing MeshBuffer APIs #18817

Draft · tt-dma wants to merge 1 commit into main
Conversation

tt-dma (Contributor) commented Mar 7, 2025

Let's discuss which Buffer API functions we want to add to MeshBuffer. Per Joseph, there are a number of convenience methods we would like to bring in (allocator(), is_l1(), buffer_layout(), bottom_up(), etc.), but it's not yet clear which others we'll need for the ttnn integration.

For now, the commit here implements every single function, and I can drop blocks of them as we see fit.
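
To make the discussion concrete, here is a minimal sketch of what a few of the forwarding convenience methods could look like on MeshBuffer, assuming they delegate to the device-local config the same way the accessors in the diff below do. The is_dram()/is_l1()/is_trace() semantics are borrowed from the single-device Buffer helpers as an assumption, not a claim about the final implementation:

// Sketch only: hypothetical convenience accessors on MeshBuffer, assuming
// device_local_config_ carries the same fields as a single-device Buffer config.
BufferType buffer_type() const { return device_local_config_.buffer_type; }
bool is_dram() const { return buffer_type() == BufferType::DRAM; }
bool is_l1() const {
    // Assumption: "L1" covers both the regular and the small L1 buffer types.
    return buffer_type() == BufferType::L1 || buffer_type() == BufferType::L1_SMALL;
}
bool is_trace() const { return buffer_type() == BufferType::TRACE; }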

uint32_t num_pages() const { return page_size() == 0 ? 0 : device_local_size_ / page_size(); }
uint32_t num_dev_pages() const;
tt-dma (Contributor, Author) commented Mar 7, 2025

num_dev_pages() is used in one op: ttnn/cpp/ttnn/operations/data_movement/sharded/reshard/device/reshard_program_factory.cpp


BufferType buffer_type() const { return device_local_config_.buffer_type; }
CoreType core_type() const;
tt-dma (Contributor, Author) commented Mar 7, 2025

Likewise used only in ttnn/cpp/ttnn/operations/data_movement/sharded/reshard/device/reshard_program_factory.cpp

bool is_trace() const;

bool is_valid_region(const BufferRegion& region) const;
bool is_valid_partial_region(const BufferRegion& region) const;
tt-dma (Contributor, Author) commented Mar 7, 2025

These two are only used in tests/tt_metal/tt_metal/api/test_buffer_region.cpp and tt_metal/impl/buffers/dispatch.cpp
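
For reference, a minimal sketch of what these region checks usually amount to, assuming BufferRegion exposes a byte offset and a byte size (the member names and the exact partial-region rule here are illustrative, not the actual implementation):

// Illustration only: a typical region-validity check for a buffer of size() bytes.
bool is_valid_region(const BufferRegion& region) const {
    // The region must lie entirely within the buffer.
    return region.offset + region.size <= size();
}
bool is_valid_partial_region(const BufferRegion& region) const {
    // A partial region is a valid region that does not cover the whole buffer.
    return is_valid_region(region) && (region.offset > 0 || region.size != size());
}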


TensorMemoryLayout buffer_layout() const { return device_local_config_.buffer_layout; }

bool bottom_up() const { return device_local_config_.bottom_up.value(); }
tt-dma (Contributor, Author)

No ttnn uses, but used in allocator/global_semaphore/lightmetal


DeviceAddr page_address(uint32_t bank_id, uint32_t page_index) const;
DeviceAddr bank_local_page_address(uint32_t bank_id, uint32_t page_index) const;
tt-dma (Contributor, Author) commented Mar 7, 2025

These two are used in the tt-metal API and in ttnn reports

std::optional<uint32_t> num_cores() const;
const std::shared_ptr<const BufferPageMapping>& get_buffer_page_mapping();
std::optional<SubDeviceId> sub_device_id() const;
size_t unique_id() const { return unique_id_; }
tt-dma (Contributor, Author) commented Mar 7, 2025

This is just used in lightmetal + ttnn/cpp/ttnn/graph/graph_processor.cpp
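
To show the surface in use, here is a hypothetical caller-side snippet that queries the accessors discussed above once they exist on MeshBuffer. The describe_mesh_buffer helper and its output format are made up for illustration; only the accessor names come from the diff:

#include <iostream>

// Hypothetical helper, not part of the PR: prints a one-line summary of a
// MeshBuffer using the accessors under discussion.
void describe_mesh_buffer(const MeshBuffer& buf) {
    std::cout << "buffer " << buf.unique_id()
              << ": type=" << static_cast<int>(buf.buffer_type())
              << " layout=" << static_cast<int>(buf.buffer_layout())
              << " pages=" << buf.num_pages()
              << " bottom_up=" << buf.bottom_up() << "\n";
}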
