Some thoughts on AI processing and M43.....

Joined
Apr 20, 2020
Messages
1,434
Location
Beaumaris, Melbourne, Australia
AI is still new, but it is already becoming more accurate at making medical diagnoses than doctors. Imagine where it'll be in 10 or 20 years. https://www.medicalnewstoday.com/articles/326460
Having been discharged from one of the best cardiac hospitals in the southern hemisphere (under the head of cardiology) after a 6-day stay, with an incorrect diagnosis, I fully understand.

I went home. Found the Nottingham University online cardiac nursing course. Read it thoroughly. Made my own diagnosis. Went to my GP. He referred me to the head of cardiology at another major hospital.

Within 10 minutes at my first consultation he had confirmed my diagnosis, and refined it a bit (what a surprise!!).

After a cardiac ablation operation and a 2-lead pacemaker, I'm as good as second-hand ... :rofl: .

Competence and paying attention help ...
 
Joined
Dec 6, 2015
Messages
1,199
Location
Brisbane, Australia
Real Name
Angus
Having been discharged from one of the best cardiac hospitals in the southern hemisphere (under the head of cardiology) after a 6-day stay, with an incorrect diagnosis, I fully understand.

I went home. Found the Nottingham University online cardiac nursing course. Read it thoroughly. Made my own diagnosis. Went to my GP. He referred me to the head of cardiology at another major hospital.

Within 10 minutes at my first consultation he had confirmed my diagnosis, and refined it a bit (what a surprise!!).

After a cardiac ablation operation and a 2-lead pacemaker, I'm as good as second-hand ... :rofl: .

Competence and paying attention help ...
Doctors rely on experience, medical training and comprehension. If their memory fails them, or they're relying on a 'hunch', that could be problematic.

AI relies on logic and only logic. No hunches or previous experiences to get in the way.
 
Joined
Apr 20, 2020
Messages
1,434
Location
Beaumaris, Melbourne, Australia
Doctors rely on experience, medical training and comprehension. If their memory fails them, or they're relying on a 'hunch', that could be problematic.

AI relies on logic and only logic. No hunches or previous experiences to get in the way.
That can actually create many problems for AI, Angus.

I made detailed notes of what appeared on the 24-hour monitor I was attached to. Shame that none of the staff or the cardiologist took much notice of that data. It showed a third-degree atrioventricular block.

My new cardiologist was interested in my back-of-an-envelope notes.

With complex organisms, like humans, hunches are a summation of a lifetime of experience.

I diagnosed a Mycobacterium ulcerans infection a friend had before the doctors did, mainly because I had seen one before, and there is nothing else quite like it.
 
Joined
Dec 6, 2015
Messages
1,199
Location
Brisbane, Australia
Real Name
Angus
That can actually create many problems for AI, Angus.

I made detailed notes of what appeared on the 24-hour monitor I was attached to. Shame that none of the staff or the cardiologist took much notice of that data. It showed a third-degree atrioventricular block.

My new cardiologist was interested in my back-of-an-envelope notes.

With complex organisms, like humans, hunches are a summation of a lifetime of experience.

I diagnosed a Mycobacterium ulcerans infection a friend had before the doctors did, mainly because I had seen one before, and there is nothing else quite like it.
Do some research into how AI is improving medical and biomedical research. It's fascinating stuff.
 

agentlossing

Mu-43 Hall of Famer
Joined
Jun 26, 2013
Messages
4,594
Location
Oregon USA
Real Name
Andrew Lossing
This is why ILCs are losing this battle. They still try to capture one image as accurately as possible with this lens and these settings, and let the user do all the post-processing if they so desire. In-body HDR is pretty much the only computational photography feature that actually works today
But look at the way Panasonic has figured out things like pre-burst and post focus in their 4K/6K modes. They are essentially doing the same thing Google does with the Pixel cameras, capturing data before you press the button and stacking lots of frames, although Panasonic's implementation is predominantly leaving them "unstacked." Why, incidentally, Panasonic doesn't offer an IQ mode in their 4K and 6K settings to automatically stack lots of captured images to reduce noise, like the Pixel does, I dunno.

The thing is, the phones are capitalizing on the constant readout of sensors. Mirrorless camera sensors obviously aren't constantly reading all of the pixels to provide the LCD and EVF displays, but I think an area to advance would be to capture some of that sensor data and process it into the full-res file that is recorded when you press the shutter. Essentially using the full-res file as a reference and then stacking the lower-res files on top of it should theoretically be enough to reduce noise and enhance detail in some of the same ways as the Pixel phones. All this would need (aside from smart AI-type programming, of course) is sufficient readout speed and buffer to get decent information and have it available to stack into the full image.

Or, you have vertically stacked photodiodes, like the Foveon sensor, which captures a different colour at each level of the stack and combines them into a single file. Sigma and their implementation are obviously a bit behind in terms of overall potential, though their results under ideal conditions are pretty impressive. If this were part of the array of computational features, it could come into use. Even some combination that included handheld pixel shift. I think all of the necessary concepts exist, they just have to be pursued, and I think camera manufacturers, including Panasonic and especially Olympus, aren't in a position to throw tons of money at R&D, since they have already fallen behind. And of course, the fact that I can get nearly all of Google's Pixel mojo in a phone that only cost me $279 (Pixel 3a) doesn't make the plight of camera brands any easier.
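To make the stacking idea above concrete, here's a minimal Python sketch of burst averaging. It's my own illustration, not any manufacturer's pipeline, and it assumes the frames are already aligned; real pipelines like Google's HDR+ also align and merge tiles, which is the hard part.

```python
# Minimal burst-stacking sketch: average N aligned frames.
# Random sensor noise drops by roughly sqrt(N); static detail survives.
import numpy as np

def stack_frames(frames):
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

# Demo with a synthetic scene and a 16-frame noisy burst.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(480, 640)).astype(np.float32)
burst = [scene + rng.normal(0, 25, scene.shape) for _ in range(16)]
merged = stack_frames(burst)
print(np.std(burst[0] - scene))  # ~25: single-frame noise
print(np.std(merged - scene))    # ~6: roughly a 4x reduction
```

Sixteen averaged frames cut random noise about fourfold, which is the core of what the Pixel does before any of the cleverer machine-learned steps.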
 
Joined
Apr 20, 2020
Messages
1,434
Location
Beaumaris, Melbourne, Australia
Give it time; as I said, it's still new tech.
Angus, the comments of the head of Nissan's autonomous vehicle unit make for an interesting view.

One question that needs to be answered is how to program a car to selectively break a law. Value judgements that are routinely made by humans are exceptionally difficult for machines.

We really need to go back to Asimov ...
 

piggsy

Mu-43 All-Pro
Joined
Jun 2, 2014
Messages
1,572
Location
Brisbane, Australia
TBH, as much as I enjoy the deep-learning neural network tech, it's kind of pushing even further into the same cul-de-sac in a way: cameras were already so much better than the display technology for pictures that it's essentially a question of "what wildly inappropriate shooting parameters can the user use and still get a salvageable shot?" You can simulate a noise-less image, or one at higher resolution or bit depth, as much as you like, but for something shot at an appropriate focal length and light level, etc., it's now potentially recording 10x over what even a very good consumer-grade OLED/HDR screen can display rather than 4x.

Great for saving shots that otherwise don't work or where you don't have a flash or that were shot at a time when display/sensor tech was much worse, but otherwise...
 
Joined
Aug 9, 2017
Messages
1,252
Location
Rankin Inlet, Nunavut
Angus, the comments of the head of Nissan's autonomous vehicle unit make for an interesting view.

One question that needs to be answered is how to program a car to selectively break a law. Value judgements that are routinely made by humans are exceptionally difficult for machines.

We really need to go back to Asimov ...
The car doesn't break the law. The programmer does.

That's the quandary: moral, legal, philosophical, economic.
 

BDR-529

Mu-43 Veteran
Joined
Jun 27, 2020
Messages
377
But look at the way Panasonic has figured out things like pre-burst and post focus in their 4K/6K modes. They are essentially doing the same thing Google does with the Pixel cameras, capturing data before you press the button and stacking lots of frames, although Panasonic's implementation is predominantly leaving them "unstacked."
Unfortunately there's no intelligence whatsoever in Panasonic's implementation. They just record a very short 4K video clip with each frame focused at a slightly different distance, and that's it. The user has to browse through the frames manually and select the one they like best.

Only in-body focus stacking and HDR actually merge several images into one, but neither requires intelligence per se.

The camera records data about which areas of each shot were in focus and just mechanically combines those areas from several images into one. HDR is basically the same, except that in this case the camera knows that this area in this shot is exposed correctly, so it cuts it out and combines it with other areas from other shots. It's just a mechanical cut-and-paste of pixels from here to there, without any need to understand what exactly is in the picture.
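That "mechanical" merge is simple enough to sketch. Here's a rough Python version — my own illustration, not Panasonic's actual algorithm — that keeps, for each pixel, the value from whichever frame is locally sharpest, using the Laplacian as a crude focus measure and assuming grayscale, pre-aligned frames.

```python
# Naive focus stacking: per-pixel, take the locally sharpest frame.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames):
    frames = np.stack(frames).astype(np.float32)   # shape (N, H, W)
    # Local sharpness: |Laplacian|, lightly smoothed so the
    # per-pixel selection map isn't speckled by noise.
    sharpness = np.stack(
        [uniform_filter(np.abs(laplace(f)), size=9) for f in frames]
    )
    best = sharpness.argmax(axis=0)                # per-pixel winner
    return np.take_along_axis(frames, best[None], axis=0)[0]
```

A real camera blends across the seams and works from its focus metadata rather than re-deriving sharpness, but the principle is the same cut-and-paste.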
 
Last edited:

Robstar1963

Mu-43 Hall of Famer
Joined
Jun 10, 2011
Messages
3,293
Location
Isle of Wight England UK
Real Name
Robert (Rob)
Having been discharged from one of the best cardiac hospitals in the southern hemisphere (under the head of cardiology) after a 6 day stay, with an incorrect diagnosis, I fully understand.

I went home. Found the Nottingham University online cardiac nursing course. Read it thoroughly. Made my own diagnosis. Went to my GP. He referred me to the head of cardiology at another major hospital.

Within 10 minutes at my first consultation he had confirmed my diagnosis, and refined it a bit (what a surprise!!).

After a cardiac ablation operation and a 2 lead pacemaker, I'm as good as second hand ... :rofl: .

Competence and paying attention helps ...
Agreed, and it reminds me of my own experience years ago.
I had abdominal pain and, having looked up the symptoms of appendicitis, decided that was what I had and got someone to take me to A&E.
I told them I HAD appendicitis, only to get one of those 'oh yeah, OK' dismissive replies.
After being seen by a couple of people I was told to go home and see how it went - luckily my late mum was with me, and she told them in no uncertain terms that I would not be going home.
An hour or so later (still at the hospital, lying on a bed) I was throwing up and in excruciating pain.
Appendicitis was then diagnosed!
They weren't able to operate on me that afternoon/evening because they didn't have an anaesthetist available.
I was offered morphine and at first turned it down, as I'm not one for taking drugs or pharmaceuticals if I can avoid it, but in the end the pain was so bad I had to give in - oh, that was such a relief!
I was operated on the next afternoon, after a decision was made not to use keyhole surgery, and it turned out that I had a very gangrenous appendix, which was removed.
Apparently it would have turned into peritonitis very quickly, so I'm thankful to my dear mum for keeping me there, and for my own initial diagnosis.
I'm sure that an AI interrogation would not have come to the conclusion that I should be sent home!
Ever since, I've always looked up my own or my family members' symptoms before consulting the medical profession, just to have some background knowledge and to see if I have some sort of hunch.
 

BDR-529

Mu-43 Veteran
Joined
Jun 27, 2020
Messages
377
I think computer science people would disagree with your interpretation:
"the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." I would say things like replacing a sky, or the ability to change the way a person looks, require decision-making.
In image processing, AI in practice means neural networks: "A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates."

This is the reason why, for example, AI denoise can create spectacular results: it can recognize shapes and patterns and "understands" what should and should not be in the image. (Exactly the same kind of neural networks are nowadays better than human doctors at recognizing anomalies in cardiograms and X-ray images, because it's all about finding certain patterns in huge amounts of data.)

Traditional denoising is based on a very straightforward principle: take one pixel at a time and compare it with its neighbouring pixels to see if there's a difference in colour and brightness. If a user-set threshold is exceeded, calculate an average and move the pixel closer to it by the amount the user requests. As a result, noise is reduced but finer details are lost as well, because the mechanical calculation can't tell the difference between noise and detail that is actually part of the image.

AI denoise works differently. It first analyzes the image for patterns and shapes, after which it knows that, for example, this here is an eyelash against white skin, so there should be a huge difference in pixel colour and brightness along this edge. At the pixel level, colour and brightness should be totally even on both sides of this black-tan edge, but the edge itself should be made even sharper. As a result you get an evenly black, razor-sharp eyelash against perfectly denoised skin.
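For the curious, the traditional scheme above is only a few lines. A minimal numpy sketch of the idea as described — the function name and parameters are mine, not any particular raw converter's:

```python
# Threshold denoise: pull outlier pixels toward the neighbourhood mean.
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_denoise(img, threshold=12.0, strength=0.5, size=3):
    img = img.astype(np.float32)
    local_mean = uniform_filter(img, size=size)  # neighbourhood average
    diff = img - local_mean
    noisy = np.abs(diff) > threshold             # pixels flagged as noise
    out = img.copy()
    out[noisy] -= strength * diff[noisy]         # move them toward the mean
    return out
```

Note that strong edges exceed the threshold just as noise does, which is exactly why this approach smears fine detail.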
 
Last edited:

Robstar1963

Mu-43 Hall of Famer
Joined
Jun 10, 2011
Messages
3,293
Location
Isle of Wight England UK
Real Name
Robert (Rob)
In image processing, AI in practice means neural networks: "A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates."

This is the reason why, for example, AI denoise can create spectacular results: it can recognize shapes and patterns and "understands" what should and should not be in the image. (Exactly the same kind of neural networks are nowadays better than human doctors at recognizing anomalies in cardiograms and X-ray images, because it's all about finding certain patterns in huge amounts of data.)

Traditional denoising is based on a very straightforward principle: take one pixel at a time and compare it with its neighbouring pixels to see if there's a difference in colour and brightness. If a user-set threshold is exceeded, calculate an average and move the pixel closer to it by the amount the user requests. As a result, noise is reduced but finer details are lost as well.

AI denoise works differently. It first analyzes the image for patterns and shapes, after which it knows that, for example, this here is an eyelash against white skin, so there should be a huge difference in pixel colour and brightness along this edge. At the pixel level, colour and brightness should be totally even on both sides of this black-tan edge, but the edge itself should be made even sharper. As a result you get an evenly black, razor-sharp eyelash against perfectly denoised skin.
This must involve a huge amount of programming?
 

BDR-529

Mu-43 Veteran
Joined
Jun 27, 2020
Messages
377
This must involve a huge amount of programming?
It sure does involve huge amounts of processing power. A reasonably powerful PC will easily spend nearly two minutes denoising a single 20MP RAW file. And designers must actually "teach" neural networks with huge amounts of real data, like images and cardiograms, before they learn to recognize the wanted patterns.

As a matter of fact, neural networks are so good at finding data patterns invisible to the human eye that the biggest problem with cardiogram or X-ray analysis is the huge number of false positives they generate. The algorithms must be taught to ignore patterns below a certain threshold, not to find them more effectively.
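A toy example of that trade-off, with entirely synthetic detector scores, just to illustrate the principle: raising the decision threshold suppresses false alarms at the cost of missing some real anomalies.

```python
# Decision-threshold trade-off on made-up anomaly-detector scores.
import numpy as np

rng = np.random.default_rng(1)
normal_scores = rng.normal(0.2, 0.10, 10_000)  # healthy traces
anomaly_scores = rng.normal(0.6, 0.15, 100)    # genuine anomalies

for threshold in (0.3, 0.4, 0.5):
    false_alarms = np.mean(normal_scores > threshold)
    caught = np.mean(anomaly_scores > threshold)
    print(f"threshold {threshold}: {false_alarms:.1%} false alarms, "
          f"{caught:.0%} anomalies caught")
```

Tuning that threshold is the "teaching it to ignore" part.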
 

hannahntilly

Mu-43 Regular
Joined
May 22, 2011
Messages
42
Location
Surrey, UK
This reminds me of my post on Cameraderie in 2017. I'm amazed at Google's ability to identify objects in my photos. I'm a very lazy tagger of metadata in photos and generally only note the location or person. Searches in Google Photos for items such as "bicycle", "window", "sunset", "tree", "door", "windmill" give me very good matches against my catalogue of about 30k images. I have a background in computing and can't imagine how I would go about defining the rules for such image identification - machine-learning AI is very effective. Indeed, I did a search for "cake" earlier and that gave me some great matches. Here are the first 5 that Google found in my catalogue:


[Five images attached; EXIF info hidden]


While I can understand how a machine might be able to identify #2, #3 & #5, I can't see how it got the other examples. A cake is a fairly nebulous concept in my mind and the other two aren't even cake-shaped - #1 looks (arguably) more like a camera and #4 was a decoration on top of a cake.
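Nobody outside Google knows their exact pipeline, but the general recipe is well known: run each photo through a trained classifier and index the predicted labels. Here's a hedged sketch using torchvision's pretrained ResNet-50 as a stand-in — the model, weights and label list are torchvision's, not Google's.

```python
# Tag photos with a pretrained ImageNet classifier, then search the tags.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]          # the 1000 ImageNet labels

def tag_photo(path, top_k=5):
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        scores = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    top = scores.topk(top_k)
    return [(labels[int(i)], float(s)) for s, i in zip(top.values, top.indices)]

# A "cake" search is then just a lookup over the stored tags.
print(tag_photo("some_photo.jpg"))           # hypothetical path
```

The surprising matches you describe are typical: the network keys on textures and context (plates, tables, candles) rather than on a human's concept of "cake-shaped".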
 

RS86

Mu-43 Top Veteran
Joined
Mar 26, 2019
Messages
733
Location
Finland
This reminds me of my post on Cameraderie in 2017. I'm amazed at Google's ability to identify objects in my photos. I'm a very lazy tagger of metadata in photos and generally only note the location or person. Searches in Google Photos for items such as "bicycle", "window", "sunset", "tree", "door", "windmill" give me very good matches against my catalogue of about 30k images. I have a background in computing and can't imagine how I would go about defining the rules for such image identification - machine-learning AI is very effective. Indeed, I did a search for "cake" earlier and that gave me some great matches. Here are the first 5 that Google found in my catalogue:


[Five attached images]

While I can understand how a machine might be able to identify #2, #3 & #5, I can't see how it got the other examples. A cake is a fairly nebulous concept in my mind and the other two aren't even cake-shaped - #1 looks (arguably) more like a camera and #4 was a decoration on top of a cake.
Pretty amazing. My caveman brain has an explanation of how it works: it's magic.

[animated "magic" GIF]
 

BDR-529

Mu-43 Veteran
Joined
Jun 27, 2020
Messages
377
While I can understand how a machine might be able to identify #2, #3 & #5, I can't see how it got the other examples. A cake is a fairly nebulous concept in my mind and the other two aren't even cake-shaped - #1 looks (arguably) more like a camera and #4 was a decoration on top of a cake.
Remember that Google has access to millions of images tagged as "cake", including artistic and not-so-artistic ones, to teach their neural network the properties of said item.

In this case the properties might be as simple as:
- it looks like a Play-Doh sculpture
- and it's on a table
- but there's a plate or (white) paper under it as protection
- or it stands on what looks like a traditional cake (a roundish thing, etc.)

Try searching for "play doh" or "sculpture" and check whether you get #1 and #4. At least in the latter case you should, because those are sculptures of a sort.
 

RichardC

Mu-43 Hall of Famer
Joined
Mar 25, 2018
Messages
3,560
Location
The Royal Town of Sutton Coldfield, UK.
Real Name
Richard
This reminds me of my post on Cameraderie in 2017. I'm amazed at Google's ability to identify objects in my photos. I'm a very lazy tagger of metadata in photos and generally only note the location or person. Searches in Google Photos for items such as "bicycle", "window", "sunset", "tree", "door", "windmill" give me very good matches against my catalogue of about 30k images. I have a background in computing and can't imagine how I would go about defining the rules for such image identification - machine-learning AI is very effective. Indeed, I did a search for "cake" earlier and that gave me some great matches. Here are the first 5 that Google found in my catalogue:


[Five attached images]

While I can understand how a machine might be able to identify #2, #3 & #5, I can't see how it got the other examples. A cake is a fairly nebulous concept in my mind and the other two aren't even cake-shaped - #1 looks (arguably) more like a camera and #4 was a decoration on top of a cake.
Have done number 5.

40-odd quid a head for tea, sarnies, cake and a man on the piano.

Fantastic value for money and an experience which I am looking forward to repeating.
 