{"id":1050,"date":"2023-05-27T03:42:11","date_gmt":"2023-05-27T03:42:11","guid":{"rendered":"http:\/\/thisbiginfluence.com\/?p=1050"},"modified":"2023-05-27T03:42:11","modified_gmt":"2023-05-27T03:42:11","slug":"generative-ai-reconstructs-videos-people-are-watching-by-reading-their-brain-activity","status":"publish","type":"post","link":"https:\/\/thisbiginfluence.com\/?p=1050","title":{"rendered":"Generative AI Reconstructs Videos People Are Watching by Reading Their Brain Activity"},"content":{"rendered":"<div>\n<p class=\"western\" align=\"left\"><span lang=\"en-US\">The ability of machines to <a href=\"https:\/\/singularityhub.com\/2023\/05\/02\/this-brain-activity-decoder-translates-ideas-into-text-using-only-scans\/\">read our minds<\/a> has been steadily progressing in recent years. Now, researchers have used AI video generation technology to give us a window into the mind\u2019s eye.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">The main driver behind attempts to interpret brain signals is the hope that one day we might be able to offer new windows of communication for those in comas or with various forms of paralysis. But there are also hopes that the technology could create more intuitive interfaces between humans and machines that would have applications for healthy people too.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">So far, most research has focused on efforts to recreate the internal monologues of patients, using AI systems <a href=\"https:\/\/singularityhub.com\/2022\/09\/13\/meta-built-an-ai-that-can-guess-the-words-youre-hearing-by-decoding-your-brainwaves\/\">to pick out<\/a> what words they\u2019re thinking of.
The most promising results have also come from invasive brain implants that are unlikely to be a practical approach for most people.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">Now though, researchers from the National University of Singapore and the Chinese University of Hong Kong have shown that they can combine non-invasive brain scans and AI image generation technology to create short snippets of video that are uncannily similar to the clips subjects were watching when their brain data was collected.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">The work is an extension of research the same authors <a href=\"https:\/\/arxiv.org\/abs\/2211.06956\">published late last year<\/a>, where they showed they could generate still images that roughly matched the pictures subjects had been shown. This was achieved by first training one model on large amounts of data collected using fMRI brain scanners. That model was then combined with the open-source image generation AI Stable Diffusion to create the images.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">In a new paper <a href=\"https:\/\/arxiv.org\/abs\/2305.11675\">published on the preprint server <em>arXiv<\/em><\/a>, the authors take a similar approach, but adapt it so that the system can interpret streams of brain data and convert them into videos rather than stills. First, they trained one model on large amounts of fMRI data so that it could learn the general features of these brain scans.
This model was then augmented so it could process a succession of fMRI scans rather than individual ones, and then trained again on combinations of fMRI scans, the video snippets that elicited that brain activity, and text descriptions.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">Separately, the researchers adapted the pre-trained Stable Diffusion model to produce video rather than still images. It was then trained again on the same videos and text descriptions the first model had been trained on. Finally, the two models were combined and fine-tuned together on fMRI scans and their associated videos.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">The resulting system was able to take fresh fMRI scans it hadn\u2019t seen before and generate videos that broadly resembled the clips human subjects had been watching at the time. While far from a perfect match, the AI\u2019s output was often close to the original video, accurately recreating crowd scenes or herds of horses and often matching the color palette.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">To evaluate their system, the researchers used a video classifier designed to assess how well the model had understood the semantics of the scene (for instance, whether it had realized the video showed fish swimming in an aquarium or a family walking down a path) even if the imagery was slightly different.
Their model scored 85 percent, a 45 percent improvement over the previous state of the art.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">While the videos the AI generates are still glitchy, the authors say this line of research could eventually have applications both in basic neuroscience and in future <a href=\"https:\/\/singularityhub.com\/tag\/brain-computer-interface\/\">brain-machine interfaces<\/a>. However, they also acknowledge potential downsides to the technology. \u201cGovernmental regulations and efforts from research communities are required to ensure the privacy of one\u2019s biological data and avoid any malicious usage of this technology,\u201d they write.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">That\u2019s likely a nod to concerns that the combination of AI and brain scanning technology could make it possible to intrusively record other people\u2019s thoughts without their consent. Anxieties were also voiced earlier this year when researchers used a similar approach to essentially create a rough <a href=\"https:\/\/www.scientificamerican.com\/article\/a-brain-scanner-combined-with-an-ai-language-model-can-provide-a-glimpse-into-your-thoughts\/\">transcript of the voice inside people\u2019s heads<\/a>, though experts have pointed out that this would be <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-01486-z\">impractical if not impossible<\/a> for the foreseeable future.<\/span><\/p>\n<p align=\"left\"><span lang=\"en-US\">But whether you see it as a creepy invasion of your privacy or an exciting new way to interface with technology, it seems machine
mind readers are edging closer to reality.<\/span><\/p>\n<p align=\"left\"><em>Image Credit: <a href=\"https:\/\/pixabay.com\/users\/claudid-2222736\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=5431597\">Claudia Dewald<\/a> from <a href=\"https:\/\/pixabay.com\/\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=5431597\">Pixabay<\/a><\/em><\/p>\n<\/div>\n<p><a href=\"https:\/\/singularityhub.com\/2023\/05\/26\/an-ai-recreated-videos-people-watched-based-on-their-brain-activity\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The ability of machines to read our minds has been steadily progressing in recent years. Now, researchers have used AI video generation technology to give us a window into the mind\u2019s eye. The main driver behind attempts to interpret brain signals is the hope that one day we might be able to offer
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1052,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[1485,1484,139,525,1483,1480,1481,1482],"class_list":["post-1050","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech","tag-activity","tag-brain","tag-generative","tag-people","tag-reading","tag-reconstructs","tag-videos","tag-watching"],"_links":{"self":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/posts\/1050","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1050"}],"version-history":[{"count":0,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/posts\/1050\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/media\/1052"}],"wp:attachment":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1050"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1050"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1050"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}