mask, moving_mask, and mask_all_stages #723
You need two separate binary mask images: the `mask` argument masks the fixed image and `moving_mask` masks the moving image. Using masks requires good initialization - if the foregrounds of the fixed and moving images don't overlap well, the registration can fail. You can do an initial rigid or affine registration without the masks, then use that transform to initialize a registration with the masks.
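A minimal sketch of that two-stage approach with ANTsPy, assuming NIfTI inputs and placeholder file names (this is not code from the thread):

```python
import ants

# placeholder file names; masks are binary: 1 = include in the metric, 0 = ignore
fixed = ants.image_read("fixed.nii.gz")
moving = ants.image_read("moving.nii.gz")
fixed_mask = ants.image_read("fixed_mask.nii.gz")
moving_mask = ants.image_read("moving_mask.nii.gz")

# stage 1: rough alignment without masks
affine = ants.registration(fixed, moving, type_of_transform="Affine")

# stage 2: deformable registration with masks, initialized by the affine result
deformable = ants.registration(
    fixed,
    moving,
    type_of_transform="SyNOnly",
    initial_transform=affine["fwdtransforms"][0],
    mask=fixed_mask,
    moving_mask=moving_mask,
)
warped = deformable["warpedmovout"]
```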
I am posting the two images I have from two platforms. As you can see, each of them has areas where there is no data, so there is no need to register one to the other there, and I am only interested in the overlap area. I want the area covered by both, so I build a mask that keeps what is non-zero in both of them (i.e. it excludes pixels where either vnir == 0 or swir == 0). This resulting mask would be used for both fixed and moving. This logic makes sense: if I treat them separately, it is going to push areas that exist in the moving image (SWIR) but not in the fixed image (VNIR). Prior to this I performed coregistration by hand, so the pixels are already very close, and I am using elastic registration to compensate for areas where shadows and heights are perceived differently.
OK. If the images are pre-aligned, then it might make sense to use the same mask for both. You can probably just use `mask` in this case and dispense with `moving_mask`. Note, however, that all ANTs registration is done in physical space. When you convert numpy to an antsImage it assumes some defaults for the physical space, and you have to ensure that your images and masks have the correct information. Also, you will need to add an `initial_transform` argument; since you've done the initial alignment by hand, that would be an identity transform. I don't know what method you are using for the elastic registration.
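A minimal sketch of that conversion step, assuming the mask is built as a numpy array and that an image which already has the correct geometry (here called vnir_img, a placeholder name) can supply the physical-space information:

```python
import ants

vnir_img = ants.image_read("vnir.nii.gz")              # image with correct physical space
mask_arr = (vnir_img.numpy() != 0).astype("float32")   # example binary mask as a numpy array

mask_img = ants.from_numpy(mask_arr)                   # origin/spacing/direction are defaults here
mask_img = ants.copy_image_info(vnir_img, mask_img)    # copy the correct geometry onto the mask
```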
Fantastic, thank you very much!
Hey there, I am doing a very simple coregistration between two datasets that I have, VNIR and SWIR. In each dataset there are parts where the no-data value is 0. Initially I would take the combined no-data map and set those pixels to zero, which gave me correct results, but I want to exclude those parts from the metric calculation, and setting them to zero does not accomplish that. However, the code seems to fail and does not do a correct job of moving one image onto the other. I am assuming the mask I need to pass is a binary image: 1 where I want the calculation to be done, 0 where I don't. Is that correct? Am I missing something?
Thank you
Output by simply setting zeros in the image:
Output by passing the mask:
Code:
final_mask = np.invert(np.logical_or(mean_swir_uint8 == 0, mean_vnir_uint8 == 0)).astype(np.uint8)
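Putting the pieces together, a sketch of how that mask might be passed to the registration, following the advice above to use a single mask for the pre-aligned pair; the image wrapping and the SyN transform type are assumptions, not the poster's exact setup:

```python
import ants

# wrap the numpy arrays; the fixed image is assumed to carry the correct geometry
fixed = ants.from_numpy(mean_vnir_uint8.astype("float32"))
moving = ants.from_numpy(mean_swir_uint8.astype("float32"))

mask_img = ants.from_numpy(final_mask.astype("float32"))
mask_img = ants.copy_image_info(fixed, mask_img)        # keep mask in the fixed image's space

# single mask; the metric is computed only where mask == 1
reg = ants.registration(fixed, moving, type_of_transform="SyN", mask=mask_img)
warped = reg["warpedmovout"]
```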