
DEEP Stuff 1 #16
Draft: wants to merge 19 commits into main
Conversation

@gviga commented Jan 29, 2025

Hi there,
At @GiLonga's request, I am creating this pull request for the deep functional map implementation.
I have already developed some concepts, which I will briefly discuss, but first I would like to design a roadmap for future development.
In general, the idea is to abstract as much as possible all the available implementations of deep functional maps, along with models, losses, representations, etc.
The literature on this topic is very extensive; I will refer to some milestones here:

  1. Deep Functional Maps: Structured Prediction for Dense Shape Correspondence (https://arxiv.org/abs/1704.08686)
  2. Deep Geometric Functional Maps: Robust Feature Learning for Shape Correspondence (https://arxiv.org/pdf/2003.14286)
  3. Structured Regularization of Functional Map Computations ( https://ren-jing.com/files/slides_19_SGP_structured.pdf )
  4. Spatially and Spectrally Consistent Deep Functional Maps (https://arxiv.org/abs/2308.08871)
  5. Unsupervised Learning of Robust Spectral Shape Matching (https://dongliangcao.github.io/urssm/)
  6. DiffusionNet (https://arxiv.org/abs/2012.00888)
  7. Correspondence Learning via Linearly-invariant Embedding (https://arxiv.org/abs/2010.13136)

For the moment I have implemented two main ideas:
A) Descriptors by Feature extractors
B) Functional Map optimized as a forward pass

Principal issues for now:

  1. The conversion of data from numpy to torch and vice versa: we need an easy way to convert between the two depending on the situation (a minimal sketch of such helpers is shown after this list).
  2. The descriptor dimensions (see Dimension of descriptors #15).
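As a first pass, here is a minimal sketch of what such conversion helpers could look like; the names `to_torch` / `to_numpy` and the exact signatures are only illustrative, not the actual implementation.

```python
import numpy as np
import torch


def to_torch(array, device="cpu", dtype=torch.float32):
    """Convert a numpy array (or tensor) to a torch tensor on the given device."""
    if isinstance(array, torch.Tensor):
        return array.to(device=device, dtype=dtype)
    return torch.as_tensor(np.asarray(array), dtype=dtype, device=device)


def to_numpy(tensor):
    """Convert a torch tensor back to a numpy array, detaching it from the graph."""
    if isinstance(tensor, np.ndarray):
        return tensor
    return tensor.detach().cpu().numpy()
```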

Roadmap:
a. Descriptors by feature extractors
b. Functional map optimized as a forward pass
c. Losses
d. Dataset creation
e. Optimization loops

In this commit we create the initial structure to compute descriptors using feature extractors, and implement the notion of a forward functional map, with a notebook.
@gviga changed the title from Viga learned 1 to Neural Stuff 1 on Jan 29, 2025
@gviga marked this pull request as draft on January 29, 2025 at 15:54
@gviga changed the title from Neural Stuff 1 to DEEP Stuff 1 on Jan 31, 2025

gviga commented Feb 18, 2025

Last commit update:
I rewrote part of the code about using feature extractors to compute descriptors.
I made the implementation consistent with previous choices.

Important notes:

  1. I added to_torch and to_numpy functions to the mesh class.
  2. For the moment, the output of feature extractors is made consistent with previous choices about descriptors, so the output is transposed and converted to numpy (this choice should be investigated, since it is not feasible at training time); a sketch of such an adapter is given at the end of this comment.

However, for the moment it is possible to use trained and untrained feature extractors directly in our pipelines, also combining them with other descriptors in the pipeline.
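For illustration only, a sketch of the kind of adapter described in point 2 above, which transposes the extractor output and converts it to numpy so that it matches the existing descriptor convention (the extractor interface and the class name are assumptions, not the actual code):

```python
import torch


class NumpyDescriptorAdapter:
    """Wrap a (trained or untrained) feature extractor so its output matches
    the existing numpy, k x N descriptor convention."""

    def __init__(self, feature_extractor):
        self.feature_extractor = feature_extractor

    def __call__(self, mesh):
        # Inference only: converting to numpy breaks the graph, so this path
        # is not usable at training time, as noted above.
        with torch.no_grad():
            feats = self.feature_extractor(mesh)  # assumed shape: (N, k)
        return feats.T.cpu().numpy()  # -> (k, N) numpy array
```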


gviga commented Feb 25, 2025

Summary of Recent Developments
I've made significant progress on various modules, forming a solid foundation for a functional draft of the deep functional map pipeline. Below is an overview of the key updates and areas for discussion.

DATASET
To manage datasets, I made two major design choices:

a) Implemented to_torch and to_numpy functions for both the MeshClass and BasisClass (Needs Discussion).
b) Created two new classes:

  • A dataset class for storing shapes, where each shape is represented as a dictionary of tensors.
  • A class for handling pairs of shapes, which I believe is the most natural structure for training data.
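A minimal sketch of how the pair class could look, assuming each shape is already a dictionary of tensors; the class name `ShapePairDataset` and the dictionary keys are hypothetical:

```python
from torch.utils.data import Dataset


class ShapePairDataset(Dataset):
    """Dataset yielding (source, target) pairs of shapes, where each shape
    is a dict of tensors (vertices, eigenvectors, ...)."""

    def __init__(self, shapes, pair_indices):
        self.shapes = shapes              # list of dicts of tensors
        self.pair_indices = pair_indices  # list of (i, j) index pairs

    def __len__(self):
        return len(self.pair_indices)

    def __getitem__(self, idx):
        i, j = self.pair_indices[idx]
        return {"source": self.shapes[i], "target": self.shapes[j]}
```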

DESCRIPTORS
I developed the LearnedDescriptors class, implementing versions using DiffusionNet and PointNet. This class must:

a) Contain trainable parameters (torch.nn.Module) and accept tensors as input.
b) Function as a Descriptor, ensuring compatibility with other descriptors in a descriptor pipeline, while taking TriMesh as input.
To achieve this, the LearnedDescriptors class includes both __call__ and forward methods.
Additionally, I implemented conditional checks to determine whether the input is a dictionary or a triangular mesh (Needs Discussion).
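A sketch of this dual interface under the assumptions above; the backbone, the dictionary key `vertices`, and the TriMesh attribute access are illustrative, not the actual implementation:

```python
import torch


class LearnedDescriptors(torch.nn.Module):
    """Learned descriptor usable both as a torch module (tensors in, tensors
    out) and as a classical descriptor (TriMesh in, numpy out)."""

    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone  # e.g. a DiffusionNet or PointNet module

    def forward(self, verts):
        # Training path: per-vertex features with gradients, shape (N, k).
        return self.backbone(verts)

    def __call__(self, shape):
        if isinstance(shape, dict):
            # Dict of tensors -> behave as a torch module.
            return super().__call__(shape["vertices"])
        # Otherwise assume a TriMesh-like object -> numpy descriptor path.
        verts = torch.as_tensor(shape.vertices, dtype=torch.float32)
        return self.forward(verts).detach().cpu().numpy().T  # (k, N)
```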

🚨 Important Note: Discussion needed on issue [#15].
For now, I’ve kept descriptor dimensions as k×N in the original pipeline and N×k in the deep functional map. This isn’t ideal and may need revision.

FORWARD FMAP
I implemented the functional map with a forward pass, which computes a functional map from descriptors. This module needs to be a torch.nn.Module.
The current implementation follows prior work on deep functional maps. However, we should discuss standardizing the optimization approach across different implementations, as they rely on varying assumptions.
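For reference, a minimal sketch of such a forward pass, following the standard formulation used in the papers listed above: project the descriptors into the spectral bases and solve a least-squares system for C. Mass matrices and the regularizers discussed in the literature are omitted here, so this is a simplification rather than the actual module.

```python
import torch


class ForwardFunctionalMap(torch.nn.Module):
    """Compute a functional map C from descriptors in a single forward pass."""

    def forward(self, feat_a, feat_b, evecs_a, evecs_b):
        # feat_*: (N_*, p) per-vertex descriptors; evecs_*: (N_*, k) eigenvectors.
        coeff_a = torch.linalg.pinv(evecs_a) @ feat_a  # (k, p) spectral coefficients
        coeff_b = torch.linalg.pinv(evecs_b) @ feat_b  # (k, p)
        # Solve C @ coeff_a ~ coeff_b in the least-squares sense.
        C_t = torch.linalg.lstsq(coeff_a.T, coeff_b.T).solution  # (k, k)
        return C_t.T  # C maps spectral coefficients on shape A to shape B
```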

LOSSES
I created a dedicated file to store all loss functions. Each loss is
a) Registered in a Loss Registry.
b) Called dynamically by a Loss Manager (similar to FactorSum in functional map optimization).
c) Defined in a configuration file, as demonstrated in the DEMO notebook.
This structure allows new loss functions to be implemented seamlessly—just register them and define them in the config file.
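A minimal sketch of the registry pattern described here; the decorator name, the example orthogonality loss, and the config layout are illustrative:

```python
import torch

LOSS_REGISTRY = {}


def register_loss(name):
    """Decorator registering a loss function under a string key."""
    def decorator(fn):
        LOSS_REGISTRY[name] = fn
        return fn
    return decorator


@register_loss("orthogonality")
def orthogonality_loss(fmap):
    """Penalize deviation of C from an orthogonal matrix."""
    eye = torch.eye(fmap.shape[-1], device=fmap.device)
    return ((fmap.transpose(-1, -2) @ fmap - eye) ** 2).sum()


class LossManager:
    """Sum registered losses with weights from a config, e.g. {"orthogonality": 1.0}."""

    def __init__(self, config):
        self.terms = [(LOSS_REGISTRY[name], weight) for name, weight in config.items()]

    def __call__(self, fmap):
        return sum(weight * fn(fmap) for fn, weight in self.terms)
```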

MODEL
With all components in place, I created a model.py file to define different model implementations by combining existing modules.

Example models:

VanillaFMNet = Descriptor + ForwardMap
ProperFMNet = Descriptor + ForwardMap + Permutation + Map
This modular approach makes it easy to mix and match components, adjusting interactions to produce the desired outputs (functional maps and/or permutations).
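A sketch of how `VanillaFMNet` could be assembled from the modules above, assuming the descriptor and forward-map interfaces sketched earlier and the dictionary keys `vertices`/`evecs`; the real signatures may differ:

```python
import torch


class VanillaFMNet(torch.nn.Module):
    """Descriptor + ForwardMap: extract learned descriptors on both shapes,
    then compute the functional map in the forward pass."""

    def __init__(self, descriptor, forward_map):
        super().__init__()
        self.descriptor = descriptor    # e.g. LearnedDescriptors
        self.forward_map = forward_map  # e.g. ForwardFunctionalMap

    def forward(self, batch):
        # Each side of the batch is a dict of tensors (see the dataset sketch).
        feat_a = self.descriptor(batch["source"])
        feat_b = self.descriptor(batch["target"])
        return self.forward_map(
            feat_a, feat_b, batch["source"]["evecs"], batch["target"]["evecs"]
        )
```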

TRAINER
I developed a Trainer to integrate everything. The trainer:
a) Reads a configuration file.
b) Loads the model and loss functions defined in their respective files.
c) Is not meant to be modified by users or developers.
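A stripped-down sketch of a training loop in this spirit; the config keys, the optimizer choice, and the un-batched DataLoader are assumptions made only for illustration:

```python
import torch
from torch.utils.data import DataLoader


def train(model, loss_manager, dataset, config):
    """Minimal config-driven loop: users supply data and a config dict;
    the trainer itself is not meant to be modified."""
    loader = DataLoader(dataset, batch_size=None, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])

    for _ in range(config["epochs"]):
        for batch in loader:
            optimizer.zero_grad()
            fmap = model(batch)
            loss = loss_manager(fmap)
            loss.backward()
            optimizer.step()
```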

USAGE
At inference time, we can use the trained model directly, or export the learned descriptor weights for external use or for combination with other descriptors.
The core design philosophy is:

If someone wants to modify the model, they only need to implement a new Model by combining existing modules and defining loss functions.
If someone just wants to use a model as-is, they only need to provide the data and configuration, then call the trainer.
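The export path mentioned above could rely on standard state_dict saving; the attribute name `descriptor`, the file name, and the reused `LearnedDescriptors`/`backbone` objects come from the sketches earlier in this comment and are only illustrative:

```python
import torch

# Export only the learned descriptor weights for external use.
torch.save(model.descriptor.state_dict(), "learned_descriptor.pt")

# Later: load them into a fresh descriptor and plug it into a classical
# pipeline, possibly combined with other descriptors.
descriptor = LearnedDescriptors(backbone)
descriptor.load_state_dict(torch.load("learned_descriptor.pt"))
descriptor.eval()
```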

NEXT STEPS
Several key developments are still required:

Supervised learning: no current mechanism for handling labels.
Evaluation code: awaiting distance metrics for implementation.

FINAL NOTES
The code is still in a draft stage, with inconsistencies in naming and alignment with previous versions. However, my goal was to accelerate the project’s progress and establish a strong starting point.

@luisfpereira (Owner)

Great job @gviga!
