no we can't, because the scientists in this experiment went in with a huge bias, i.e. they knew exactly what result they wanted and constructed the experiment to deliver it. As we don't know what bats see, we can't construct an experiment around the final result.
You know how some people were saying birds are stealth surveillance, and we were like, haha, what a crazy idea.
5G enabled neuralink + AI decoder = birds are CCTV now.
That monkey suffers from bad vision.
What if this were possible on human brains…
https://www.science.org/content/article/ai-re-creates-what-people-see-reading-their-brain-scans
[edit: article from 7 MAR 2023]
I don't know how people can be so optimistic about the future with stuff like this. This is creepy/borderline evil. Between this ("mind reading" technology), transhumanism, gene editing, "longevity", AI, I see no reason to be optimistic about the future. The future is looking really dystopian already, and we've only just begun. The AI era started only 2 years ago.
study shows the brain remembers everything but cannot recall it
does this mean that in a later update I'll be able to watch my entire life so far as a movie?
This type of thing is more possible than most people realize, often in science that isn't exactly making headlines, but this doesn't really make sense. I'd like to see what the data looks like before going through the AI.
This is (or is similar to) what Neuralink does. They have them implanted in monkeys, have them do things like play pong with/without paddles, record and feed the data into ML models, and they can then (re)create the motions.
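For anyone curious what "feed the data into ML models and recreate the motions" means mechanically, here's a deliberately tiny sketch: a one-feature linear decoder from firing rate to cursor velocity. All names and numbers are made up for illustration; real BCI decoders use hundreds of channels and far fancier models (Kalman filters, neural nets), not this.

```python
# Toy sketch: decoding a 1-D cursor velocity from a single neural
# "firing rate" feature with ordinary least squares. The data below is
# fabricated and perfectly linear on purpose.

def fit_linear_decoder(rates, velocities):
    """Closed-form OLS fit: velocity ≈ slope * rate + intercept."""
    n = len(rates)
    mean_r = sum(rates) / n
    mean_v = sum(velocities) / n
    cov = sum((r - mean_r) * (v - mean_v) for r, v in zip(rates, velocities))
    var = sum((r - mean_r) ** 2 for r in rates)
    slope = cov / var
    intercept = mean_v - slope * mean_r
    return slope, intercept

def decode(rate, slope, intercept):
    return slope * rate + intercept

# Fabricated training data: higher firing rate -> faster cursor.
rates = [10.0, 20.0, 30.0, 40.0]
velocities = [1.0, 2.0, 3.0, 4.0]

slope, intercept = fit_linear_decoder(rates, velocities)
print(round(decode(25.0, slope, intercept), 3))  # → 2.5
```

The real systems learn this mapping in much higher dimensions, but the "record, fit, replay" loop is the same shape.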
Am I the only one who finds it completely useless to use AI upscaling with this? It only gives us nicer-looking comparisons (which may look better to the general public, who are impressionable), but for those of us familiar with AI images, I think this use of generative AI makes things worse.
You want the raw data with a human interpreting it, not an AI upscaler that makes stuff up like a double-headed fish and invents the background as it goes.
These companies are grossly negligent in overselling the abilities of these products at this stage for fundraising purposes. There's a major hurdle in AI: getting the intelligence to educate itself without human clarification.
Take dreams for example. Can you recall your dreams with as much detail as needed to train an AI so it can connect the dream to the recorded signals? You would have to have such clear, lucid recall that you would essentially be awake and talking in real time to tell it what you are seeing to get any meaningful analysis or video out of this.
The true money play is to figure out how to transmit propaganda to a target population.
If it becomes really accurate, imo, there is no discussion anymore that there is no free will. If thoughts are matter that can be scanned, we are not as unique as we thought. Completely deterministic.
Talk me through your thought process here. Why would you have thought that the function of the brain wasn’t material? Why does this answer the question of determinism in your thinking?
Imo anyone who is educated but thinks humans aren't as deterministic as everything else in the world needs to drop their ego and think about reality a bit through an objective lens.
I think there's nothing intrinsically different about myself and the world around me when it comes to determinism.
If I was to go and eat some funky mushrooms or drink a beer, my thoughts change purely because those compounds affect the physical processes occurring in my brain, which are what actually drive my behaviour. We're complex, but I've seen 0 evidence to suggest we have free will or are anything but slaves to physics, like the rest of the universe around us. Our behavior is determined by the structure of our brain and the data which enters through our senses. Evolutionary forces have done a great job convincing us otherwise in order to drive us to make optimal decisions though :)
The point is: because the world and our biology are too complex and chaotic to know all the details of the present state of the world and our brains, there is no way to predict our thoughts or behavior so the point of whether our behavior is deterministic is moot.
The point is: something being complex has never stopped it being discussed or researched, not sure why science would be afraid to push and discuss the frontiers of our knowledge. How complex or currently unknown something appears to be is moot.
Do you realize what sub this is? Many people here believe that in the foreseeable future we'll have a singularity caused by ASI. These are the exact types of scientific knowledge which would become accessible in the case of a singularity occurring.
I also believe we could seek to use brain organoids to prove the deterministic nature of our brains.
Apologies! I was not trying to say the issue shouldn't be discussed. I was trying to suggest that the world and our brains are much too complex to reduce our behavior to the simple binary, either we have free will or we don't. In the show Westworld, when asked if they are real, the robot answers, "If you can't tell, does it matter?". I thought this idea had some resonance with the idea that even if our behavior is deterministic yet based on conditions that are too complex to ever fully know or predict, does it matter? Maybe it was a stretch. Anyway, you might find this interesting: http://www.robertsapolskyrocks.com/chaos-and-reductionism.html
Oh yeah for sure, my opinion that the cognitive processes which control my life are fully deterministic does not mean I choose to live it any differently in practice - I have a purely ecocentric pov. I also don't think we'll be fully predicting these processes within my lifetime unless we see a hard takeoff scenario.
I'm familiar. Of course, I'm not sure even that affects free will, for it to do so presumes there is nothing under the hood so to speak that underpins the thought.
Thoughts are not matter, they’re electrical impulses. And the fact that they can be measured says nothing of whether or not they are produced deterministically.
The fact that this technology itself is stochastic sort of goes against what you’re claiming.
The universe is essentially simulating everything in existence and it seems to use whatever’s already there to create the next state. Now the determinism argument suggests that each state is wholly knowable - at least theoretically - and thus leads only to one possible next state. But what we know of reality tells us that each state - even each little part of each state - is truly infinite. It is impossible to know infinity and an infinite state can’t be used to construct another infinite state deterministically. So what happens instead? Probability enters the picture. Full clarity is an illusion but an orderly fuzziness is possible. So that’s what happens.
Tldr: the universe is an infinite probabilistic simulation of every possible state of existence
Serious Question: What do we do with AI mind reading technology?
What are the applications? I'm sure there are some outside of fringe medical use cases, but can't readily see a general use application for something to tell you what you're looking at.
This is an interesting take. I could definitely see therapists in a sleep clinic watching a patient's dreams in real time. That could truly provide some insight.
OP you straight up did not read the article or something because both of those rocking chairs are super fucked up and both ai. If you’re going to post slop on the sub at least get your shit straight
A bit of cold water here. Reading through the paper it looks as though the AI reconstructions were done with the target images already in the dataset.
It would have been very simple to use new images not in the dataset. It's suspicious they didn't do that.
It's always some oversold flimflam. The field is exciting on the multiple-decades timeline, but in the shorter term the headline hype for shit like this is going to be insufferable.
And building up expectations of the masses like this will probably create a big let down that they will say that the hype bubble has burst
The amount of money being used to fund this nonsense is so depressing.
It's not suspicious it just can't do that. It's not weird if you are a scientist, the paper is clear on what it is trying to do. It is the popular science article that purposefully misconstrues it for clicks.
I assume they would need to start on something like this to train the ai what patterns mean what.from the maybe they can start learning to actually see
probably because the results would not have been as good.
yeah
Gotta get that paper published...Publish or perish as they say.
I haven't read the paper either, so someone who has can correct me. But I'm gonna toss out some basic heuristics here to consider.

>It would have been very simple to use new images not in the dataset.

If it's simple to do something, especially simple enough for someone on the side to consider, then you can almost always presume that the people who are working on it have considered that and even mention it themselves. So I'd guess this point is covered in the paper. Scientists generally explain why they do things in their papers. That's part of the standard publication outline.

You could have said, *"I'm* **curious** *why they didn't do that,"* and then either read the paper to find out, or let someone respond who has. I'm not actually sure what can be considered *suspicious* here. That word choice is what's throwing me off and prompting my response. I don't know how that concept applies here.

Even in the case of media headlines not clarifying this, that's just strategic status-quo, hyping up research beyond its scope.
I just went ahead and read the research. It's an iterative improvement on their previous work. The previous work was mainly about trying to find an encoding that aligns with brain activity, and it also generates the images. This paper adds PAM (basically attention over different parts of the brain signal?) to it, and the generated images get higher quality.

[https://thirzadado.com/work/pam/](https://thirzadado.com/work/pam/)

To answer "It's suspicious they didn't do that": encoding and decoding from brain waves is less studied, and I assume it doesn't work as well with new images, so they couldn't publish a paper on that. The research still showed novel ideas, but it has major flaws and limitations, unlike what the clickbait title suggests.

It's a very new area, so your expectations should be on par with the things OpenAI made back in 2017. Have you ever tried GPT-2, or even GPT-1? I did when I was in school studying them. They were godawful, but they were a stepping stone towards what we have now.
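If PAM really is attention over parts of the brain signal (my reading of it, not the paper's actual architecture), the core operation would be something like softmax-weighted pooling across regions. The region count, feature shapes, and scores below are all invented for illustration:

```python
import math

# Hedged sketch of attention-style pooling over brain-signal regions:
# score each region, softmax the scores into weights, and pool the
# per-region feature vectors as a weighted sum. Everything here is a
# toy stand-in, not the paper's model.

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(region_features, region_scores):
    """Pool per-region feature vectors with softmax attention weights."""
    weights = softmax(region_scores)
    dim = len(region_features[0])
    pooled = [0.0] * dim
    for w, feat in zip(weights, region_features):
        for i, x in enumerate(feat):
            pooled[i] += w * x
    return pooled

# Three fake "regions", 2-D features each; the model strongly
# attends to the third region.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [0.0, 0.0, 10.0]
pooled = attend(features, scores)
print([round(x, 2) for x in pooled])  # → [1.0, 1.0]
```

The intuition for the quality gain is just that informative regions get upweighted and noisy ones get drowned out before reconstruction.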
This is just citation inflation, which is very prevalent in CS papers; their iteration doesn't justify a new publication without exploring the stated issues. Academia wants to tell good stories, and failure doesn't fit into the story. So it's the attitude that's the problem: the attitude of hiding failure.
What? They have something new in this paper, and it is a continuation of their previous work.
Exactly. The conspiratorial tenor of this sub is such a fucking killjoy it eliminates any chance for a good conversation because every thread is full of 50 chucklefucks posting the same "such and such has financial incentive of course he's saying good things about it", "this CEO/scientist said this 14 years ago now he says this. Satan!". It fucking sucks.
It is pretty normal to be sceptical of any paper. We have the peer review filter in place but that's hardly fool proof.

Add on the sensationalist reporting that fails to mention a pretty big caveat (that the reconstructing network has the input images in its training) and yes, I think the gut reaction to be suspicious in that sense was correct.

Of course, researchers can't be blamed for sensationalist reporting, so it'd be reasonable to stop at saying the reporting could have been more balanced without immediately throwing shade at the researchers.

I also wonder long term how problematic this obstacle is.

Most things you can see in the world are covered in some form in image databases.

For small image sets getting the image right is far less impressive, but I don't think real world applications will have or need completely image agnostic interpretation networks.

Unless the aliens show up and we want to send in a monkey to see what something totally unknown looks like the approach taken would probably scale quite well.
The original image has three rows, the third one being without the target pictures. It still looked good, but not as impressive. But a great starting point.
Interesting, that makes a big difference. It's basically no different than when someone scans your brain and matches the exact pattern. They do it with letters as a proof of concept. This isn't of much use if it only works on the same exact images.
Yeah, I haven't read the latest on brain reading technology, but a few years ago at least, mind reading was possible to the point where you could think of a word and the mind reader would know the word with something like 80% certainty. HOWEVER, the massive caveat is that the brain prediction AI had to be trained on that specific person's brain imagery. If you used brainwave patterns from a person other than the one it was trained on, it couldn't get a read at all. Basically it looks like individual humans don't use the same language inside their brains.

So this AI probably can't really read images from a brain that it wasn't trained on: images annotated with a specific monkey's brain readings.
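A toy illustration of that caveat, with completely fabricated "activation patterns": a nearest-centroid word decoder trained on one subject's templates works fine on that subject but misreads another, because the same word maps to a different pattern in each brain.

```python
# Toy illustration of why a decoder trained on one person's brain
# patterns fails on another's. All vectors and subjects are fabricated.

def nearest_word(pattern, centroids):
    """Nearest-centroid classifier over per-word activation templates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda w: dist(pattern, centroids[w]))

# Subject A's (made-up) patterns for two words:
subject_a = {"dog": [1.0, 0.0, 0.0], "cat": [0.0, 1.0, 0.0]}

# Subject B encodes the SAME words with different patterns:
subject_b = {"dog": [0.0, 0.0, 1.0], "cat": [1.0, 0.0, 0.0]}

# A decoder trained on A works on a noisy reading from A...
print(nearest_word([0.9, 0.1, 0.0], subject_a))  # → dog
# ...but misreads B: B thinking "cat" looks like A's "dog".
print(nearest_word(subject_b["cat"], subject_a))  # → dog
```

Real decoders are far more sophisticated, but this is the shape of the transfer problem: the classifier is fine, the representations just don't line up across brains.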
>Basically it looks like individual humans dont use the same language inside their brains.

I know what you're trying to say here, and in that context you're absolutely right. Humans encode concepts between their neurons in unique ways that don't map human to human.

I still think we'll discover that there's a shared "logical language" used as a transmission protocol in the electrical signal itself.
Seems like that's the way you'd do this though. You train on all available images and then reconstruct based on which image you perceive the animal to be looking at.

It's still an incredible decoding of the brain's activity and could lead to eventual techniques that don't require known datasets.
It will absolutely always need to be purpose built for a specific brain because brains don't have a common "language".
What? Pretty sure every brain is working on the same base level biology. No way each is a bespoke implementation and yet completely sharing the exact same experiences.
We are not sharing the exact same experience, and if you were able to prove that we did, there would likely be a Nobel Prize in it for you. Unique signal structure results in unique activation patterns - it's like a massive multidimensional fingerprint (because there is a time component and an intensity to activation as well, not just 3 spatial dimensions).
“Always”. Lmao famous last words in ai.
I can always tell the technology people from the hopeful fanboys. Want to guess how?
> “Always”. Lmao famous last words in ai.
Ngl, this type of comment will always be based. :)))
I don’t get it. If the system knows it’s one of a set of options, why isn’t the output the original image?
you might not be familiar with how diffusion image generation works

it's pretty much a rehash of: https://stability.ai/research/minds-eye

the paper for which was published in 2023: https://arxiv.org/abs/2305.18274

but i'm pretty sure some other version was already online in 2022 when i was making an in-house presentation

all of it is cool stuff :)
I wish every post started with a comment like this.
Ah that's a different ballgame. Gotta read up on this.

Last I heard about these attempts, it's really hard to take neuron signals and turn those into pictures. But we were getting more accurate.
It's not doing that. They made a model of an individual's brain activity when observing a specific image; after this they run the images and signals to create an attribution model, and at the end they show the stimulus and recreate the images based on the signals. The model isn't perfect. Brains aren't perfectly repetitive either, nor do subjects have perfect focus even when sensorially deprived.

Basically, they are by no means turning neuron impulses into images. They're using data similar to an EKG and doing augmented image retrieval based on it.

Edit: still very impressive, just not what you were saying
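To make "augmented image retrieval" concrete: score a decoded-signal embedding against a set of candidate image embeddings and return the best match. All of the names and vectors below are fabricated for illustration:

```python
import math

# Minimal retrieval sketch: cosine similarity between a decoded-signal
# embedding and each candidate image embedding; return the best match.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(signal_embedding, candidates):
    """candidates: dict of image name -> embedding."""
    return max(candidates,
               key=lambda name: cosine(signal_embedding, candidates[name]))

candidates = {
    "banana": [1.0, 0.0, 0.2],
    "face":   [0.0, 1.0, 0.1],
    "tree":   [0.3, 0.3, 1.0],
}
decoded = [0.9, 0.1, 0.3]  # noisy decode that should land near "banana"
print(retrieve(decoded, candidates))  # → banana
```

This is also why having the target images in the candidate set matters so much: the retrieval only has to beat a finite list, not reconstruct from scratch.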
Ok, if it's the same image dataset and the same person as in the training dataset, it's sus, but if it's a different person, it means it can at least generalize in that respect to others. That's getting kinda creepy already.
I've been following this quite a bit because I created a similar project about three or four years ago. Test subjects were able to recall certain images that were already mapped. The real fancy stuff comes from MRI reconstruction.
So they tested the same images on different people, or the same people and the same images from the data set?
Big difference. With how popular this sub has become, the level of clickbait has increased.
I love that people here don't understand what they tried to achieve...
No, they used a training set and a separate (held-out) test set, as they state in the paper:

>... In total, B2G consists of 4000 and 200 training (1 repetition) and test (20 repetitions) examples, respectively

...

>GOD consists of 1200 and 50 training (5 repetitions) and test (24 repetitions) examples, respectively. ...
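Side note on why the test examples get 20+ repetitions while training examples get only 1-5: repeated presentations of the same stimulus can be averaged, which cancels roughly zero-mean recording noise and gives a cleaner signal to evaluate against. A toy demo (the signal and noise numbers are hand-picked for illustration, not from the paper):

```python
# Toy demo: averaging repeated noisy trials of the same stimulus
# recovers the underlying signal better than any single trial.

true_signal = 1.0
# Hand-picked zero-mean "noise", one value per repetition:
noise = [0.4, -0.3, 0.2, -0.5, 0.1, -0.2, 0.3, -0.4, 0.25, 0.15]
trials = [true_signal + n for n in noise]

single_trial_error = abs(trials[0] - true_signal)
averaged_error = abs(sum(trials) / len(trials) - true_signal)

print(round(single_trial_error, 2), round(averaged_error, 2))  # → 0.4 0.0
```

With real (random, not exactly zero-mean) noise the averaged error shrinks roughly with the square root of the repetition count rather than vanishing, but the principle is the same.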
Thanks a lot Debbie downer
lmao
>With the direct recordings of brain activity, some of the reconstructed images are now remarkably close to the images that the macaque saw, which were produced by the StyleGAN-XL image-generating AI. However, **it is easier to accurately reconstruct AI-generated images than real ones**, says Dado, as aspects of the process used to generate the images can be included in the AI learning to reconstruct those images.

Thought Police not coming until later update, then.
https://preview.redd.it/zn1g34cehiad1.jpeg?width=500&format=pjpg&auto=webp&s=cf3de20bdcf9b2018817de98909a6ff5f99c63df
Is a future where you'll actually be able to show your dreams in image/video form possible?
They've already done trials where they reconstructed dream images with 70-80%+ accuracy.
dreams which I cannot remember? what's the catch?
The catch is you remembered you kissed a family member for no apparent f'n reason....
Everyone will know you're a pervert
> 70-80%+ accuracy.

7-8% you mean?

Go look at the papers man, sh!t looks blurry asf and half the time cannot be interpreted as anything at all.

70-80% lol you wish
https://arxiv.org/abs/2305.11675 "average accuracy 85%" cope and seethe ¯\\\_(ツ)\_/¯
"cope and seethe that researchers lie" cool beans. No wonder there is a replicability crisis with this level of honesty. And it's shills.
you're literally commenting under a post about other research with even better results

semantic decoding works great

I understand if you're anxious about mind reading tech, but the current methodology involves hours of EEG scans and then fine-tuning the deep learning model to the subject's brain signals

this method is highly reproducible and reliable, but it won't happen without the subject noticing, so as it stands now, you shouldn't worry about it
> I understand if you're anxious

We can already read minds with EEGs, we can already hear some people with internal monologues (their vocal cords move slightly), and soon we are going to have 99.9% accurate lie detectors that will change how society works forever (a post-lie society).

But this ain't it. If you go to the paper and LOOK AT THE RAW IMAGES, come back and tell me that's 85% accurate to what the animal saw... Come the f on bro
Sure, if you first recorded your brain as it watched the images your brain would later dream.
That would be awesome: we think about a scene and it gets recorded. That means I could make a big-production film with only my imagination.
and loads of companies will be chomping at the bit to sell you what you dream about
That would be incredible. I'd love to relive the worlds I dream, because I barely remember them when I wake up.
Put a notepad on your nightstand and make the first thing you do each morning is write a quick summary of your dream. The dream memory will fade unless you do something like this.
This is how the Outer Gods gain access to and eventually invade our dimension.
That might just become possible
bro
They're able to do this on monkeys, now? I will definitely read this in a bit! Edit: Apparently they have been doing this for a while with monkeys now, among other animals that weren't named. Crazy to be honest.
Just hook your brain up to AI, look at the article, and then have AI reconstruct it from your vision and create a video summary.
lol, at some point maybe
There was a paper over a year ago doing this with humans too, but the predictions weren't anywhere near as close as this.
Yeah, seems that the AI 'knows' which region of the brain to look at for a more accurate prediction. I love that they're using AI-generated images to show the monkeys. I got a chuckle from that, idk why.
Yeah, gotta look this up. Ten years ago neuroscientists were saying they could only project some of the colors, and the pictures were inaccurate like 80% of the time. This would be a hugeee leap.
I mean, I don't even believe this happened.
The paper the article's based on: [https://www.biorxiv.org/content/10.1101/2024.06.04.596589v1](https://www.biorxiv.org/content/10.1101/2024.06.04.596589v1)
This is just neural experiments they've been doing for decades + AI image generation. It's all for clicks
seriously
yes
It's all fun and games til they're hooking people up to this without their consent!
Could you imagine being hospitalized for something and then the doctor walks in and says, “OK, Mr. Such-and-Such, we got some results back. Your charts look normal, but let’s talk about that time you thought wood varnish smelt good”…
Or... "So we've brought you in for questioning in relation to X, we suspect you have committed this crime and this court warrant says we can interrogate your mind to find proof"
"The good news is that you didn't commit X. However, you're being charged with 200 counts of copyright infringement."
Lmao!! This is all becoming a bad Futurama plot haha.
Don't worry, we'll willingly sign the terms and conditions to our new Neuralinks when our employer requires us to wear them 24/7! (technically we'll have a choice, it'll be job or no job.)
I'm sure ChatGPT will tell me the dangers when I feed it the Ts and Cs, and I'm sure I'll decline them! /s just in case
Especially in pods. [It's messy](https://cdn.theasc.com/Matrix-669.jpg).
The popularization of dystopian science fiction has really done a number on our society.
Vaccines? Imagine if they start injecting people with piss without their consent! That's how all doomers sound to me.
If they scan my brain all they will see is big booty Latinas
Same brother, same.
Article [https://www.newscientist.com/article/2438107-mind-reading-ai-recreates-what-youre-looking-at-with-amazing-accuracy/](https://www.newscientist.com/article/2438107-mind-reading-ai-recreates-what-youre-looking-at-with-amazing-accuracy/)
You guys really should be posting the article *as* the link instead of an image.
Maybe we can finally find out [what it's like to be a bat](https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F).
No, we can't, because the scientists in this experiment went into it with a huge bias, i.e. they knew exactly what result they wanted and constructed the experiment to deliver that result. Since we don't know what bats see, we can't construct an experiment around the final result.
That’s not a link to the actual paper.
You're right, it's a link to the article he posted.
You know how some people were saying birds are stealth surveillance, and we were like, haha, what a crazy idea. 5G enabled neuralink + AI decoder = birds are CCTV now.
The source image is AI-generated, and the recreation is also AI-generated, from the same dataset. What's even the point?
US Government: https://i.redd.it/ef7c1rfsjjad1.gif
Chinese and Russian governments are more known to be into this kind of stuff lol
> Top row: what the monkey saw They were showing it AI generated images?
Terrifying but awesome, a dream recording device has been one of my dreams for a long time
Just what the CIA always wanted lol
Ok this needs some explanation
i would love to be able to record what goes through my visual cortex in realtime.
Wild
I want to record my dreams every night plz.
Could this eventually be used to record dreams?
I was thinking that… but also the reverse, ads coming to your brain 😔
Have you watched Nic Cage's Dream Scenario?
Nope. Any good ?
Your comment just reminded me of it as it's part of the plot.
Do you want Paprika? Because this is how you get Paprika. Add this to the “Torment Nexus” pile lmao
Just add the possibility to send signals (remote control) and you have The Matrix's Agents.
Uh oh spaghettios
How often did they repeat the stimulus?
That monkey suffers from bad vision. What if this were possible on human brains… https://www.science.org/content/article/ai-re-creates-what-people-see-reading-their-brain-scans [edit: article from 7 MAR 2023]
I don't understand, both rows look AI generated? Did they show the messed up AI images to the monkey?
Yes, this one's brain needs a fix, about a thousand of them.
Will it be like braindances in Cyberpunk in the next few years?
Can you share a link to the paper?
I am waiting for tech to also write not just read. Can you imagine? Like The Matrix learning method.
Waiting for a reproduction
I don't know how people can be so optimistic about the future with stuff like this. This is creepy/borderline evil. Between this ("mind reading" technology), transhumanism, gene editing, "longevity", AI, I see no reason to be optimistic about the future. The future is looking really dystopian already, and we've only just begun. The AI era started only 2 years ago.
A study shows the brain remembers everything but cannot recall it. Does this mean that in later updates I'll be able to watch my entire life so far as a movie?
This type of thing is more possible than most people realize, often in science that isn't exactly making headlines, but this doesn't really make sense. I'd like to see what the data looks like before going through the AI.
So is this how the Matrix starts?
Aphantasia gang rise up
Recording one's dreams to watch later the next day.
Oh boy! Manmade horrors beyond my comprehension!
This could then be used to uncover any PIN code and password stored in someone's brain.
This is (or is similar to) what Neuralink does. They have them implanted in monkeys, have them do things like play pong with/without paddles, record and feed the data into ML models, and they can then (re)create the motions.
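The pipeline described above (record neural activity during a task, feed it into ML models, then recreate the motions) can be sketched with toy data. Every name, shape, and number below is invented; a simple least-squares decoder stands in for the actual models:

```python
import numpy as np

# Toy sketch of motion decoding: simulated "spike rates" are driven by
# cursor velocity, a linear decoder is trained on one session, and the
# cursor path is recreated from neural data alone. All values invented.

rng = np.random.default_rng(1)

T, n_units = 500, 24
t = np.linspace(0, 10, T)
vel = np.stack([np.cos(t), np.sin(2 * t)], axis=1)   # true 2-D cursor velocity

tuning = rng.normal(size=(2, n_units))               # each unit's direction tuning
rates = vel @ tuning + 0.2 * rng.normal(size=(T, n_units))

# Train: least-squares decoder mapping rates -> velocity
W, *_ = np.linalg.lstsq(rates, vel, rcond=None)
vel_hat = rates @ W

# Recreate the trajectory by integrating decoded velocity over time
path_true = np.cumsum(vel, axis=0)
path_hat = np.cumsum(vel_hat, axis=0)

r = np.corrcoef(path_true[:, 0], path_hat[:, 0])[0, 1]
print(f"x-trajectory correlation: {r:.3f}")
```

This is the "pong without paddles" idea in miniature: once the decoder is trained on recorded sessions, the motion can be recreated from the neural signals without any physical movement.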
Am I the only one who finds it completely useless to use AI upscaling with this? It only allows us to see better comparisons (which may look better to the general public, who are impressionable), but for those of us familiar with AI images, this use of generative AI seems to make things worse. You want the raw data with a human interpreting it, not an AI upscaler that makes stuff up, like a double-headed fish, and invents the background as it goes.
Congratulations on falling for a scam article, all those images are AI.
obvious bullshit
Looks like both rows are AI generated images. That chair is fucked up, the spider has nine legs in both. Text on the boat is fucked.
These companies are grossly negligent in overselling the abilities of these products at this stage for fundraising purposes. There’s a major hurdle of AI where the intelligence is able to educate itself without human clarification. Take dreams for example. Can you recall your dreams with as much detail as needed to train an AI so it can connect the dream to the recorded signals? You would have to have such clear, lucid recall that you would essentially be awake and talking in real time to tell it what you are seeing to get any meaningful analysis or video out of this. The true money play is to figure out how to transmit propaganda to a target population.
If the top row is what the monkey saw then what the fuck is that first chair
Don’t give them access to my dreams
ah, the old saying. monkey see, monkey brainwaves intercepted by mind reading virtual intelligence computer
also, why did they show the monkey images that are already AI-generated asf?
Paper link?
If it becomes really accurate, imo there is no discussion anymore: there is no free will. If thoughts are matter that can be scanned, we are not as unique as we thought. Completely deterministic.
Talk me through your thought process here. Why would you have thought that the function of the brain wasn’t material? Why does this answer the question of determinism in your thinking?
Imo anyone who is educated but thinks humans aren't as deterministic as everything else in the world needs to drop their ego and think about reality a bit through an objective lens. I think there's nothing intrinsically different about myself and the world around me when it comes to determinism. If I was to go and eat some funky mushrooms or drink a beer my thoughts change purely because those compounds affect the physical processes occurring in my brain which are what actually drive my behaviour. We're complex, but I've seen 0 evidence to suggest we have free will or are anything but slaves to physics, as the rest of the universe around us. Our behavior is determined by the structure of our brain and data which enters through our senses. Evolutionary forces have done a great job convincing us otherwise in order to drive us to making optimal decisions though :)
If there is no way to ever know all of the factors and influences that go into determining your behavior, does it matter?
Nonsense, and yes, of course it matters; it's one of the most important discussions in psychology and biology.
The point is: because the world and our biology are too complex and chaotic to know all the details of the present state of the world and our brains, there is no way to predict our thoughts or behavior so the point of whether our behavior is deterministic is moot.
The point is: something being complex has never stopped it being discussed or researched, not sure why science would be afraid to push and discuss the frontiers of our knowledge. How complex or currently unknown something appears to be is moot. Do you realize what sub this is? Many people here believe that in the foreseeable future we'll have a singularity caused by ASI. These are the exact types of scientific knowledge which would become accessible in the case of a singularity occurring. I also believe we could seek to use brain organoids to prove the deterministic nature of our brains.
Apologies! I was not trying to say the issue shouldn't be discussed. I was trying to suggest that the world and our brains are much too complex to reduce our behavior to the simple binary, either we have free will or we don't. In the show Westworld, when asked if they are real, the robot answers, "If you can't tell, does it matter?". I thought this idea had some resonance with the idea that even if our behavior is deterministic yet based on conditions that are too complex to ever fully know or predict, does it matter? Maybe it was a stretch. Anyway, you might find this interesting: http://www.robertsapolskyrocks.com/chaos-and-reductionism.html
Oh yeah, for sure, my opinion that the cognitive processes which control my life are fully deterministic does not mean I choose to live it any differently in practice - I have a purely ecocentric pov. I also don't think we'll be fully predicting these processes within my lifetime unless we see a hard takeoff scenario.
While there has been some work on scanning thoughts. This particular work has nothing to do with thoughts but with reconstructing visual processing.
They can reconstruct "formal" thoughts (when you are actively thinking in words and language) with the same approach. It's called semantic decoding.
I'm familiar. Of course, I'm not sure even that affects free will, for it to do so presumes there is nothing under the hood so to speak that underpins the thought.
Yeah I don't think this has much bearing on the free will question either, and I thought only to offer a complementary addendum. Cheers! :)
What made you come to that conclusion? There's a lot you just jumped to. That doesn't mean it's deterministic.
Thoughts are not matter, they’re electrical impulses. And the fact that they can be measured says nothing of whether or not they are produced deterministically.
The fact that this technology itself is stochastic sort of goes against what you’re claiming. The universe is essentially simulating everything in existence and it seems to use whatever’s already there to create the next state. Now the determinism argument suggests that each state is wholly knowable - at least theoretically - and thus leads only to one possible next state. But what we know of reality tells us that each state - even each little part of each state - is truly infinite. It is impossible to know infinity and an infinite state can’t be used to construct another infinite state deterministically. So what happens instead? Probability enters the picture. Full clarity is an illusion but an orderly fuzziness is possible. So that’s what happens. Tldr: the universe is an infinite probabilistic simulation of every possible state of existence
Serious Question: What do we do with AI mind reading technology? What are the applications? I'm sure there are some outside of fringe medical use cases, but can't readily see a general use application for something to tell you what you're looking at.
Increasing understanding is a goal by itself
Watching back dreams will provide insights about the brain
This is an interesting take. I could definitely see therapists in a sleep clinic watching a patient's dreams in real time. That could truly provide some insight.
Misleading. Like most AI articles.
could you elaborate for us laymen..
It's done on monkeys, also it's old news
I remember this being done a decade ago. Also not very impressive when these images were already in the dataset.
Oh but they upscaled them with AI! So it looks closer to the target image! STONKS when the clueless investors see this!
With these tools they can read people's minds, or if someone dies, just copy the brain over to a USB stick.
OP, you straight up did not read the article or something, because both of those rocking chairs are super fucked up and both AI. If you're going to post slop on the sub, at least get your shit straight.