I recommend creating a branch where transcode() (or, renamed, transcodeInt() and transcodeFloat()) is used as the bottleneck, to see how well it works. Not a high priority, but it should be tried before releasing the library. How does it affect speed? How useful is it really?
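One way to answer the speed question before committing to the refactor might be a rough micro-benchmark like the sketch below, comparing the inline arithmetic against the same arithmetic behind an overridable method. The names are invented (transcodeInt comes from the renaming idea above); a JMH harness would give more trustworthy numbers, and the JIT usually inlines a monomorphic call anyway, so the difference may well be negligible.

```java
// Rough timing sketch: inline mapping vs. the same mapping routed through an
// overridable instance method. Invented names; not the library's actual API.
public class TranscodeTimingExample {
    // Candidate bottleneck method; non-static so a subclass could override it.
    int transcodeInt(float sig) {
        return Math.round((sig + 1.0f) * 0.5f * 255.0f);
    }

    public static void main(String[] args) {
        TranscodeTimingExample mapper = new TranscodeTimingExample();
        float[] sig = new float[1 << 20];
        for (int i = 0; i < sig.length; i++) sig[i] = (float) Math.sin(i * 0.01);
        long sum = 0;   // accumulate results so the JIT can't discard the loops

        long t0 = System.nanoTime();
        for (int pass = 0; pass < 100; pass++)
            for (float s : sig) sum += Math.round((s + 1.0f) * 0.5f * 255.0f);  // inline arithmetic
        long t1 = System.nanoTime();
        for (int pass = 0; pass < 100; pass++)
            for (float s : sig) sum += mapper.transcodeInt(s);                  // through the method
        long t2 = System.nanoTime();

        System.out.printf("inline: %d ms, via method: %d ms (checksum %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum);
    }
}
```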
(copied from PixelScanner Issues)
Should the transcode(int) and transcode(float) methods be called everywhere that transcoding happens in other methods in PixelAudioMapper?
We do this mapping almost everywhere. If we want to use different ranges for the img[] and sig[] arrays, using transcode as a bottleneck would mean we only have to override these two methods to change the behavior of every method that maps audio to RGB or RGB to audio. The overhead of a method call should be minimal. So, I guess the question is: in what circumstances would overriding be useful?
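For reference, a minimal sketch of what the two bottleneck methods might look like, assuming the current defaults of 8-bit RGB channel values in [0, 255] and audio samples in [-1.0, 1.0]. The class name and the signalToPixels() example are invented for illustration and are not the library's actual API.

```java
// Sketch of the two transcode bottlenecks under the assumed defaults:
// 8-bit RGB channel values in [0, 255], audio samples in [-1.0, 1.0].
public class MapperTranscodeExample {

    /** Map an audio sample in [-1.0, 1.0] to an RGB channel value in [0, 255]. */
    public int transcode(float sig) {
        float clipped = Math.max(-1.0f, Math.min(1.0f, sig));   // guard against out-of-range samples
        return Math.round((clipped + 1.0f) * 0.5f * 255.0f);
    }

    /** Map an RGB channel value in [0, 255] to an audio sample in [-1.0, 1.0]. */
    public float transcode(int chan) {
        return (chan / 255.0f) * 2.0f - 1.0f;
    }

    /** Example of a mapping method routed through the bottleneck:
     *  convert a signal buffer to opaque grayscale pixels. */
    public int[] signalToPixels(float[] sig) {
        int[] img = new int[sig.length];
        for (int i = 0; i < sig.length; i++) {
            int v = transcode(sig[i]);
            img[i] = 0xFF000000 | (v << 16) | (v << 8) | v;   // pack gray value into ARGB
        }
        return img;
    }
}
```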
Or perhaps we want the library to work only with RGB in [0, 255] and audio in [-1.0, 1.0]? Could it easily handle 16-bit channels, for example, or a much wider signal amplitude range, with just a change to transcode?
Well, no. Bit-shifting is also used throughout the RGB math, but that could be overridden too, I think. Changing the audio range would be simpler. For example, we could handle 5-volt peak-to-peak signals without normalizing them to audio range. Is that useful?
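If the two methods really are the only bottleneck, that audio-range change becomes a subclass-sized edit: override the two methods and every mapping method that routes through them inherits the new range. A hypothetical sketch, building on the example above (the class name and constant are made up):

```java
// Hypothetical subclass that accepts 5 V peak-to-peak signals (samples in
// [-2.5, 2.5] volts) directly, without normalizing them to audio range first.
public class FiveVoltMapperExample extends MapperTranscodeExample {
    private static final float HALF_RANGE = 2.5f;   // half of 5 V peak-to-peak

    @Override
    public int transcode(float sig) {
        float clipped = Math.max(-HALF_RANGE, Math.min(HALF_RANGE, sig));
        return Math.round((clipped + HALF_RANGE) / (2.0f * HALF_RANGE) * 255.0f);
    }

    @Override
    public float transcode(int chan) {
        return (chan / 255.0f) * 2.0f * HALF_RANGE - HALF_RANGE;
    }
}
```

Sixteen-bit channels would be harder, as noted above, since the 8-bit masks and shifts in the RGB packing and unpacking would also have to change, not just these two methods.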