I'm clearly missing something here. Why all the fuss about multi-aspect sensors? I could understand if, by some magical means, all of a sensor's pixels could be shifted to the new format, but that's not the case. It seems like it's just a crop that the camera does at exposure time. With all the fuss about raw processing, etc., this seems like a very minor post-processing step that any software can do. I'm not trolling here; I just don't get it and want to know what I'm missing.
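For what it's worth, here's the kind of trivial crop I mean — a minimal sketch, not any camera's actual firmware; the function name and the example dimensions are mine, purely for illustration:

```python
def crop_to_aspect(width, height, target_w, target_h):
    """Return (x, y, w, h) of the largest centered crop with the target aspect ratio."""
    target = target_w / target_h
    if width / height > target:
        # Frame is wider than the target ratio: trim the sides.
        w = round(height * target)
        h = height
    else:
        # Frame is taller than the target ratio: trim top and bottom.
        w = width
        h = round(width / target)
    return (width - w) // 2, (height - h) // 2, w, h

# e.g. a hypothetical 4:3 sensor (4000x3000) cropped to 16:9
print(crop_to_aspect(4000, 3000, 16, 9))  # (0, 375, 4000, 2250)
```

A few lines of arithmetic, which is why I don't see what doing it in-camera buys you.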