An often underappreciated detail of stereoscopic 3D is that the image produced is still not actually fully 3D.
Even with just a single eye, it is possible to see depth: this would be via monocular depth cues, such as depth of field.
If you ever wondered why 3D visuals, no matter how technically advanced, never quite felt right, this is likely the culprit. When your eyes adapt to the stereo 3D cues, the monocular cues are lost and vice versa. There are some VR technologies that do hope to achieve both at the same time using eye (and iris?) tracking, but I haven't been following the topic for a while. It's essentially a quest for a lightfield display.
The monocular depth cues are not lost; rather, the main issue with VR and stereoscopic images is the forced depth focus. This creates a mismatch between vergence and accommodation, which can interfere with natural depth perception. There is extensive research on this topic, particularly regarding the vergence-accommodation conflict:
They do? Well, not exactly, but even with the multiple iPhone cameras, you can do "spatial capture" (https://support.apple.com/guide/apple-vision-pro/capture-dev...), and see it on a Vision Pro. I don't know exactly how it works; presumably partly classic stereo photography from the slightly displaced cameras (wide and ultrawide), plus some more complicated computer vision tricks.
But yeah just having the cameras on either side would be great, although you couldn't play around with the distance as easily.
Hm, I wonder if, instead of the displacement, it's using the difference between the standard and ultrawide fields of view and doing some sort of interpolation between them? Maybe even pulling data from the LiDAR? The fake depth-of-field blur/bokeh uses some of this, I guess.
I’d love to look at these pictures but have never succeeded. In particular, I find crossing my eyes to be a rather painful (as in, it physically hurts) experience, so I stop immediately if I try.
I love that this comes up every so often on Hacker News. I used to love Magic Eye as a kid, and have been taking stereo photos on and off ever since: experimenting with how to take them (moving camera, from a plane, etc.) and how to view them (cross view, and putting them into a Meta Quest).
My favourite related trick is to use cross viewing to solve find-the-difference puzzles.
Set up so the two images (with slight differences) are next to each other, cross your eyes, and look for 'shimmering' spots - these are the differences between the images.
It makes differences very easy to spot, which is pretty cool!
Those who can't cross-eye or parallel-eye these images can try looking at them through a DIY binocular (empty pipes or used kitchen roll tubes work just as well).
It will only work with parallel eye images though (at the end of this article).
most stereograms are designed to look correct when you cross your eyes
This is how I look at stereograms (looking nearer than the page), but at least some of the images on this page seem like they're designed for the other way around (looking in the distance, beyond the page).
This one looks weird when I look at it cross eyed, but fine when I look at the other way.
The January 1983 issue of Creative Computing had an article on stereo vision; I built the stereoscope in it and wrote BASIC for my IBM PC to do this. Awesome!
They're not far enough apart. Stereo vision in humans is best inside 3-5 metres, usually, and by 10 metres out it's not that great.
That's with pupils 65mm apart, give or take. Now scale that down to the horizontal distance between the lenses on a phone. Coincidentally, on mine they are about 13mm apart. You just multiply everything down linearly: my camera has the same experience at 60cm from an object as I do at 3m. It would be pretty useless past 2m, and even within those constraints you'd notice the quality drop off with range inside a single object approaching 50cm of depth.
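That linear scaling follows from the small-angle approximation, where the disparity angle is roughly baseline over distance. A quick numeric sanity check (the 13mm and 65mm baselines are the figures above; the helper name is just for illustration):

```python
# Sanity-check the linear scaling: with a narrow baseline, the camera pair
# sees the same disparity angle at a proportionally shorter distance.
def equivalent_distance(distance_mm, cam_baseline_mm=13.0, eye_baseline_mm=65.0):
    """Distance at which the camera pair's disparity angle matches what
    human eyes see at `distance_mm` (small-angle approximation)."""
    return distance_mm * cam_baseline_mm / eye_baseline_mm

# Human stereo at 3 m corresponds to the 13 mm phone pair at 60 cm...
assert equivalent_distance(3000) == 600
# ...and the ~10 m useful range of human stereo shrinks to about 2 m.
assert equivalent_distance(10_000) == 2000
```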
Now that's OK in itself; you could get some useful work done with that, but there are so many provisos that it would be very hard to sell as a feature. On top of that, it's algorithmically painful. There's a reason commercial 3D scanners don't (typically, only) use stereo pairs; there's almost always a better way to do it.
(Oh, and to get out ahead of questions: Spatial Video capture on the iPhone 15 apparently uses the LIDAR sensor for the depth map, not just the cameras.)
Yes, the spatial videos that iPhones can record are just stereoscopic video. This hasn't been cloned by other manufacturers because there have been very few devices to view them on, but that might change with Android XR.
I found these all very easy to see on my phone screen. By the end of the article I was already able to instantly "go 3d" without thinking "go cross eyed" or use tricks like looking at my nose.
The problem I think most people will have is screen size. If the screen is too big then going cross eyed will be harder and might cause strain. Straight eyed can be easier, but I still think there's a limit on screen size. Magic eyes when printed are about the right size for people's heads so just work.
The second problem is the pictures look half as big as soon as you "go 3d". So although it's very cool and I buy the author's point that some pictures need the depth to make sense, there's always something lost from the full size picture. On my phone it made it like looking at a postage stamp!
Still I might try it myself given how easy it sounds. I imagine even a slight bit of wind would make some subjects impossible, though!
The first set are (as the text says) designed for cross-eyed viewing, the others for parallel viewing. The two use opposite left/right arrangements. So if it feels like half of them are backwards, you may be using the same viewing method for both sets?
I just went back and double-checked, and they're all correct. If they look inverted, you are probably diverging your eyes ("parallel view") instead of crossing them. If that's more comfortable for you, there is a section of images arranged in that orientation at the end of the article.
Came to say the same. Depth is inverted for me. Is it because instead of looking at each picture with the eyes straight, we cross them (and each sees the opposite picture)?
Heh I thought you must've tested these on a teeny tiny monitor to make them work. No wonder! But of course when crossing eyes there is essentially no limit. Thanks for the clarification.
I prefer wigglegrams. If you're looking for an example - Wikipedia page has one from 1927[1]!
[1]: https://en.wikipedia.org/wiki/Wiggle_stereoscopy
Was going to say the same thing. Presenting stereo pairs has a lot of layout and resolution issues, to say nothing of the fact that some people have stereo blindness to varying degrees (lazy eye is an extreme case). The author is correct that stereo depth can greatly enhance an image, but a wigglegram does this at full resolution with no visual puzzle-solving.
WHAT... this is the first time in my life I have been able to see 3D with one eye!
Closing one eye and I still see the depth effect.
My brain works very differently than I assumed...
With this effect, I guess one could even make people who are blind in one eye see in 3D.
Thank you! I have not, to this day, have been able to see any Magic Eye/Cross Eyed or similar images. A wigglegram is immediately trivial.
I'd probably most appreciate a layout with one frame on the left, the wigglegram in the middle, and the other frame on the right, so that I can get a sense of both the distances and the detail.
I'm slightly suspicious that this comment was written by a bird.
This is very common in structural biology papers, where you need to make figures of complex 3D arrangements of atoms, but the figures must be printed in 2D. Typically, using molecular modeling software, you find your view of choice. Then you rotate ±0.5° and render two images, and put those side by side as a stereo pair figure.
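The trick is easy to sketch outside any modeling package. This toy version (plain NumPy, a naive pinhole projection, and a few made-up "atom" positions, all assumptions rather than any real package's API) renders the same points rotated ±0.5° and shows that the horizontal shift between the two views varies with depth:

```python
import numpy as np

def rotate_y(points, degrees):
    """Rotate Nx3 points about the vertical axis, like spinning the model."""
    t = np.radians(degrees)
    rot = np.array([[np.cos(t),  0.0, np.sin(t)],
                    [0.0,        1.0, 0.0      ],
                    [-np.sin(t), 0.0, np.cos(t)]])
    return points @ rot.T

def project(points, focal=500.0, z_offset=10.0):
    """Naive pinhole projection: push the model back and divide by depth."""
    z = points[:, 2] + z_offset
    return np.stack([focal * points[:, 0] / z, focal * points[:, 1] / z], axis=1)

# Three made-up "atoms" at different depths.
atoms = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.5, 2.0],
                  [-1.0, -0.5, -2.0]])

# The stereo pair: the same scene rendered at -0.5 and +0.5 degrees.
left = project(rotate_y(atoms, -0.5))
right = project(rotate_y(atoms, +0.5))

# The horizontal shift between views depends on each point's depth;
# that per-point disparity is what the visual system fuses into 3D.
disparity = right[:, 0] - left[:, 0]
```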
It takes quite a bit of practice to see them well:
https://spdbv.unil.ch/TheMolecularLevel/0Help/StereoView.htm...
Thanks for that! It is quite impressive and I learned to see the 3D molecule on first try.
I remember seeing something similar, but made of an array of dots (you did not know what was hidden in the dots until you saw it by crossing your eyes).
I did this on a school project back in the 90's, with a structure of quinol clathrate that was completely wrong but very pretty. I was very into povray at the time. My chemistry teacher didn't quite know what to make of it...
Technically, rotation is going to give a slightly different result than lateral displacement, right? But it's very similar for small angles.
Since the advent of models like Depth Anything, you can now convert 2D images into this effect using them plus a bit of creative processing. Here's a non-technical overview that plugs some software and talks about the underlying models: https://www.owl3d.com/blog/2d-to-stereoscopic-3d-with-ai-dep...
Bonus, I also found this real-time 3D-ifier for your screen: https://github.com/zjkhurry/stereopsis-anything
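The "creative processing" is usually some form of depth-image-based rendering: shift each pixel sideways in proportion to its predicted depth, then fill the holes. A deliberately naive sketch (the tiny hand-made depth map stands in for a model's output; holes are simply left as the original pixels, where real pipelines inpaint them):

```python
import numpy as np

def synthesize_right_view(image, depth, max_disparity=8):
    """Warp a mono image into a second-eye view: shift each pixel sideways
    in proportion to its normalized depth (larger value = closer, the
    convention many monocular depth models use)."""
    h, w = depth.shape
    out = image.copy()  # original pixels show through where gaps open up
    span = depth.max() - depth.min()
    d = (depth - depth.min()) / (span if span > 0 else 1.0)
    shifts = np.round(d * max_disparity).astype(int)
    for y in range(h):
        for x in range(w - 1, -1, -1):
            nx = x + shifts[y, x]
            if nx < w:
                out[y, nx] = image[y, x]
    return out

# Toy example: a flat 4x4 "image" where column 1 is marked as near.
img = np.arange(16, dtype=float).reshape(4, 4)
dep = np.zeros((4, 4))
dep[:, 1] = 1.0
right = synthesize_right_view(img, dep, max_disparity=2)
```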
I had to, hopefully you don't mind moultano!
Same content, but all lined up and rendered the whole article in cross-view.
You can now read the article and see the pictures while in cross-view.
https://jasonjmcghee.github.io/you-should-make-cross-views-3...
This is great! I can only imagine how strange it would look to observe someone reading an article this way, but it's a great idea!
Now just need to duplicate the mouse cursor to be in 3D too!
I also had to shrink the window as I couldn't manage the cross view on a widescreen :)
Yeah I didn't know how big to make the text so I figured people could just zoom out as needed lol
Clever!
If you are able to cross two images for the 3D effect you can also do it to spot differences like a savant in “spot the differences” games. Give it a try: https://spotthedifference.games/
You’re welcome.
It's interesting how the difference appears, a strange artifact. Very fun to feel like a pro clicking through the differences quickly
Very relevant: https://news.ycombinator.com/item?id=42655870
My mind is blown right now. This is so cool.
When they wrote "your screen can display 3D photos", I thought it would be a hardware hack and not something that depends on a human physiology hack.
Something like stereoscopic GIFs come to mind, e.g. https://tenor.com/fr-CA/view/dain-stereoscopic-daingifs-3d-m...
In other words, taking the two images and swapping them quickly creates the illusion of depth.
Edit:
Looking into it, there's a word for it. Wiggle stereoscopy: https://en.wikipedia.org/wiki/Wiggle_stereoscopy
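A wigglegram is just a two-frame looping animation, so with Pillow (an assumption that it's available; the toy frames below stand in for a real stereo pair) the whole thing is a few lines:

```python
from PIL import Image

def make_wigglegram(frame_a, frame_b, out_path, ms=120):
    """Save the two halves of a stereo pair as a two-frame looping GIF.
    The back-and-forth motion parallax reads as depth, no eye tricks needed."""
    frame_a.save(out_path, save_all=True, append_images=[frame_b],
                 duration=ms, loop=0)

# Toy pair: the same square drawn a few pixels apart, a crude stereo offset.
left = Image.new("RGB", (64, 64), "white")
right = Image.new("RGB", (64, 64), "white")
left.paste(Image.new("RGB", (16, 16), "red"), (20, 24))
right.paste(Image.new("RGB", (16, 16), "red"), (24, 24))
make_wigglegram(left, right, "wiggle.gif")
```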
There's a whole bunch of these over at https://old.reddit.com/r/wigglegrams/ if you want more
A great source for stereo pairs is NOAA's aerial imagery data, consisting of various snapshots along an airplane's trajectory. For example here is a stereo pair of Desecheo Island:
https://cdn.coastalscience.noaa.gov/datasets/aerialphotodb/u...
https://cdn.coastalscience.noaa.gov/datasets/aerialphotodb/u...
EDIT: it can be tedious to discover such pairs. If only there were a tool...
I can generally see the Magic Eye pictures very well.. these are way harder.
The tiny thumbnails at the bottom of the page work, but for the larger images I can't cross my eyes enough.
I think it depends greatly on getting the image just the right size and also getting the viewing distance right. On large monitors it seems harder to see.
> You can do this by holding your finger substantially in front of the image, and focusing solely on the finger with your eyes, while turning your mind’s attention to the image behind it while keeping your eyes still.
This tip in the article helped me a lot; it's much easier to cross your eyes further with something to actually focus on.
It's helpful if you can smoothly zoom in on the images. Start zoomed out far enough that you can easily see the effect, and then slowly enlarge the images. Your brain will work to keep them in focus.
I believe the author switched left and right, because the inverse ones at the bottom work fine.
Magic eye pictures are viewed by diverging your eyes, so the "parallel view" versions at the bottom work correctly with that method.
"Cross view" pictures require converging your eyes, so the images have to be in the opposite position from what your eye would see.
You're right... I guess I always thought Magic Eye was converged.
Once I look at the ones at the bottom and just zoom in/out in the browser I can see them perfectly.
I have a 33" monitor and seeing them means sitting much closer than normal and then zooming out in the browser.
Nah, magic eye is crossing, at least the ones I know. And I can only do crossing
The best way I came across doing this is to try and do a "thousand yard stare" while looking at the image. It's super reliable.
Zoom out on the page a few times. Try 50%
Yes, making them smaller certainly helps.
I love this effect. I had a book of Magic Eye pictures as a kid, which was a similar effect.
I'm not sure how practical the "crossing your eyes to get 3D" thing actually is, it makes my eyes water after a minute or so, but it's still sort of cool to see my cheapy monitor doing 3D without any special glasses.
I had the same book - the one with the black border? :) There was a dolphin or something, right? So long ago, can barely remember...
The watery eye thing only happens to me if I cross my eyes and focus in front of the picture rather than behind it. The latter takes some time to let the eyes relax, but it's much more natural.
If you stitch the photos together seamlessly, you can display them on a VR headset in a really natural way. I take stereo images in landscape mode, stitch them together top/bottom, and then enjoy them on my Quest 2 using the Pigasus media player.
If you use a 180° fisheye lens, you can immerse yourself in the scene. Just make sure to keep the camera perfectly level, or you'll end up making yourself sick if you try to view the images unadjusted.
I only learned somewhat recently that a lot of (or all?) Magic Eyes are meant to be parallel viewed instead of cross eyed. The difference was pretty impressive the first time I saw one correctly.
I seem to recall that Magic Eye pictures had you decross your eyes, so you were looking past the page. It was a bit harder to do.
eh, you get used to it. I've watched whole movies like this
"Wow, a sailboat!"
I've done this with my SLR. Moving the camera different amounts can give a more pronounced effect; however, it can be more difficult to get the images to converge.
I had a lot of fun doing this 20 years ago. Sadly, my visual acuity has become significantly different between my eyes (even with correction), and the enjoyment of 3D displays has really diminished as a result.
Just musing because I'm working and busy:
I wonder how difficult it would be to do video. (Obviously you'd have to shoot two videos in parallel versus just moving the camera and shooting again.)
Converting existing 3D videos to a cross-eyed viewing format would probably be the easiest way to experiment with it. I wonder if anybody has done that. I've never looked at 3D movie formats before. I always assumed it was two interleaved streams.
I remember seeing video done this way on youtube during the 3D tv craze about 10 years ago (not the 3d stuff that youtube supported, this was just people messing about with 2 images side by side). It worked about as well as the examples here, but was not a particularly comfortable experience for anything longer than a few minutes.
For that matter, there are a couple of "Magic Eye" videos out there, which look like TV static until you cross your eyes enough.
> I wonder how difficult it would be to do video
One approach is time-shifted side-window video: take time-offset frame pairs from mono video pointed perpendicular to the camera motion, pretending that frames offset in space, time, and orientation are offset only in horizontal space perpendicular to the shot. The motion has to be sufficiently perpendicular that the difference in apparent size within pairs doesn't bite, and similarly for motion stability, orientation, distance from subjects, and non-horizontal motion. Think phone video out the side window of a train/plane/bus/car with a view, or a drone footage segment that meets the constraints: a river seen from a bridge-crossing train window, or a waiting rocket seen from a circling drone. The upside is easy capture, and an inter-ocular distance that's potentially quite large (miles) and adjustable in post-processing. Downsides: non-static things don't fuse (traffic by that river, fog off the rocket), and it can be a pain to find usable segments in existing video.
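The pairing step itself is trivial; all the difficulty lives in the constraints. A toy sketch with NumPy arrays standing in for decoded frames, where the frame offset is the knob that sets the effective inter-ocular distance:

```python
import numpy as np

def timeshift_stereo_pairs(frames, offset):
    """Pair each frame with one `offset` frames later, side by side. For
    steady sideways motion, the time offset acts as a horizontal baseline,
    so each pair is a crude stereo view. Left eye = earlier frame here;
    swap the order for cross-view, or if the camera moves the other way."""
    for i in range(len(frames) - offset):
        yield np.hstack([frames[i], frames[i + offset]])

# Toy "video": a bright column sliding sideways one pixel per frame.
frames = [np.zeros((4, 8), dtype=np.uint8) for _ in range(6)]
for t, f in enumerate(frames):
    f[:, t] = 255
pairs = list(timeshift_stereo_pairs(frames, offset=2))
```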
With AI generating depth maps from mono images, and filling in image gaps, it may someday be possible to generate stereo from some mono video. One challenge is visual artifacts involving depth can be very noticeable.
You should move it by the distance between your pupils for the best effect. Mine is a bog-standard 63mm, so I'm lucky.
If you get it wrong things will look too big or too small and the 3D effect will be softer or too pronounced.
The point is that you can force the perspective of a giant or an ant if you so choose.
Using two cameras, I had a time lapse of fog coming in where you could see the 3D structure, because they were on two different streets looking out.
Like the OP, this was done nearly 20 years ago. Now you don't even get fog any more.
I remember reading about a physical device someone constructed with mirrors (I think at an early burning man) that gave you the experience of a huge inter ocular distance to get a giant's eye view. I've always wanted to try one (or the opposite to experience a tiny inter ocular distance).
Found it http://eyestilts.com/intro.html
A pair of periscopes laid flat will do it, but it'll really confuse your eyes. I can feel it on these images: the eye muscles are trying to converge on a distance closer than the focusing muscles want to focus on, and I can tell that's a bit weird. That might be an age thing though.
You get the same thing with VR headsets as their physical focus is always around 1 metre away (3-4 feet). But you get used to it quickly.
The reason you need to change the inter-frame distance is because the amount of information carried by the parallax drops off quite quickly, to the extent that at the sort of distance in the tree photo, your brain is mostly using motion cues for 3d reconstruction, not stereo vision. Increasing the horizontal distance simulates bringing everything correspondingly closer.
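Under a simple fronto-parallel pinhole model (an assumption, but good enough for intuition), disparity is proportional to baseline over depth, so widening the inter-frame distance is equivalent to shrinking the distance to the scene:

```python
def disparity(baseline, depth, focal=1.0):
    """Horizontal disparity of a point at `depth` seen by two pinhole
    cameras `baseline` apart (fronto-parallel small-angle model)."""
    return focal * baseline / depth

# Doubling the baseline produces the same disparity as halving the
# distance, which is exactly "bringing everything correspondingly closer";
# it is also why hyperstereo scenes can read as nearby miniatures.
assert disparity(2 * 0.065, 20.0) == disparity(0.065, 10.0)
```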
Yes but that makes it look like a miniature world.
To simplify capture, I've picked up a couple of digital stereo cameras (Fujifilm FinePix REAL 3D is a good one). The image quality is so-so and they're fairly affordable still on eBay (maybe $200 or so).
Last summer on a road trip to Alaska, it was almost my exclusive camera for the trip. When I got back, I wrote an app to take the MPO files it produces and turn them into a printable parallel-view image. The side-by-side images are intended to be printed and used in an old-fashioned stereoscope.
https://github.com/EngineersNeedArt/Stereographer
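For context, an MPO file is essentially several complete JPEG streams back to back, plus Multi-Picture metadata in an APP2 segment. A deliberately naive split is sketched below (robust tools parse the MP index instead, since embedded EXIF thumbnails can also begin with an SOI marker; this only handles well-behaved synthetic data):

```python
def split_mpo_naive(data: bytes):
    """Split an MPO byte stream at each JPEG start-of-image (SOI) marker.
    Only a sketch: real MPOs may embed thumbnails that also start with
    SOI, so production code should read the MP index in APP2 instead."""
    soi = b"\xff\xd8\xff"
    starts = [i for i in range(len(data) - 2) if data[i:i + 3] == soi]
    return [data[a:b] for a, b in zip(starts, starts[1:] + [len(data)])]

# Two fake minimal "JPEG" streams glued together, like an MPO stereo pair.
left_jpeg = b"\xff\xd8\xff\xe0" + b"L" * 8 + b"\xff\xd9"
right_jpeg = b"\xff\xd8\xff\xe0" + b"R" * 8 + b"\xff\xd9"
parts = split_mpo_naive(left_jpeg + right_jpeg)
```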
3D cameras, though, won't let you experiment with the distance between images. They'll give you a consistent look, similar to what our eyes see, but what's cool about using one camera and moving it is that you can exaggerate the distance to show depth much farther away.
Ugh, there's some people out there who cannot see these. Not for a lack of trying, I've personally been trying since the 90s.
I have always had trouble with magic-eye pictures - I am told my eyes are quite different shapes. I can see stereograms with some effort.
I believe that there is a small percentage of the population for whom stereoscopic images (including 3d films) just don't work at all. Either they lack the ability to perceive depth directly or their brains aren't fooled by images with no parallax relative to their eye movements. I don't have any cites for this though.
There are two communities, you need to find out which one you belong to.
https://www.reddit.com/r/CrossView
https://www.reddit.com/r/ParallelView
There is a test image you can try:
https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fg...
Whichever "view" looks to be closer is the one you should use.
I don't get it... the reddit image is the same on both sides. Trying to look at it cross-eyed just gives me a headache.
"Foreground text describes viewing type" - there is no foreground, it's just text on a page?
It’s not the same. They are slightly different and depending on whether you cross-eye or parallel view them, both texts will slightly move away or towards you in 3D.
My eyes must be broken, I don't see that at all.
I don't think I understand what the image is supposed to do. I can only see from one eye at a time.
Have you tried using pipes/kitchen towel rolls?
As in, try doing something like this:
1. Zoom the images small enough that your lines of sight to them are almost parallel.
2. Make a binocular out of used kitchen roll tubes.
3. Point each tube at exactly one of the images.
It should just work. Both images should converge like they do in a binocular.
(you can then try removing this DIY binocular suddenly and see if you can maintain focus)
Presumably that would only work for the wall-eyed (diverging) images given at the bottom of the screen, the cross-eyed (converging) images given throughout most of the article wouldn't work with this technique.
It took me a while too, especially those double images that look totally contorted and only come out when you relax your focus. It took ages, but suddenly they popped, and now I can do them every time. Once they came through, they were crystal clear.
I am with you. I could also never see the hidden image things. Been trying for 40 years.
I made a stereogram a few years back that turned out to be unusually easy for people to view. Perhaps you'll have luck with it:
https://www.damninteresting.com/temp/damn-interesting-stereo...
The trick is to focus normally on the image, then move your focus to be a few inches behind your screen, as if you're looking through it at something behind.
What should I see?
A triangle with an exclamation mark inside.
The trick for me was to completely cross my eyes and produce the double images, then slowly uncross/cross them until I could start to see the image, which eventually clears up.
Do you manage to align the two small white squares above each picture so they are on top of each other?
Yes.
Amblyopia and monocular diplopia here. No way I can see these, my brain won't let me.
The video game Magic Carpet had a couple of 3D modes: anaglyph 3D requiring blue/red glasses, and a stereogram mode [1]. The latter was not really usable, but it was a cool trick, especially for the time ('94).
[1] https://youtu.be/iZT-S2F191I?si=8k9jniqA98wgq0Hu&t=1090
I once found a renderer for Quake 2 that rendered in stereograms. Managed to make it through the first level.
I’ve been able to view this type of picture forever. But I’ll credit the article with today being the first time I’ve actually taken them myself, put them side by side in my notes app, and been pretty impressed with how simple it was to get a neat effect.
Great! I wrote this in the hopes that more people would start making them, because I love them.
An often underappreciated detail of stereoscopic 3D is that the image produced still isn't fully 3D.
Even with just a single eye, it is possible to see depth: this would be via monocular depth cues, such as depth of field.
If you ever wondered why 3D visuals, no matter how technically advanced, never quite felt right, this is likely the culprit. When your eyes adapt to the stereo 3D cues, the monocular cues are lost and vice versa. There are some VR technologies that do hope to achieve both at the same time using eye (and iris?) tracking, but I haven't been following the topic for a while. It's essentially a quest for a lightfield display.
The monocular depth cues are not lost; rather, the main issue with VR and stereoscopic images is the forced depth focus. This creates a mismatch between vergence and accommodation, which can interfere with natural depth perception. There is extensive research on this topic, particularly regarding the vergence-accommodation conflict:
https://en.wikipedia.org/wiki/Vergence-accommodation_conflic...
Yes, that's the effect I'm referring to. Maybe my wording was too sloppy.
Nintendo 3DS gang rise up! Though it wasn't perfect or all that useful, it was still a nice feature; I made some fun photos with it.
I’m surprised that phone manufacturers don’t stick cameras on opposite ends of the phone to allow the quick capture of these.
They do? Well not exactly, but even with the multiple iphone cameras, you can do "spatial capture" (https://support.apple.com/guide/apple-vision-pro/capture-dev...), and see it on a Vision Pro. I don't know exactly how it works, presumably somewhat with classic stereo photography from the slightly displaced cameras (wide and ultrawide), and some more complicated computer vision tricks.
But yeah just having the cameras on either side would be great, although you couldn't play around with the distance as easily.
Hm I wonder if instead of the displacement, it’s using the difference in the standard and ultrawide field of view and doing some sort of interpolation of them? Maybe even pulling data from the lidar? The fake depth of field blur/bokeh uses some of this I guess.
I’d love to look at these pictures but have never succeeded. In particular, I find crossing my eyes to be a rather painful (as in, it physically hurts) experience, so I stop immediately if I try.
This is why I never do the cross-eyed version of focusing on these. For me that's always been uncomfortable.
Using the parallel view, or focusing out beyond the picture to make the images converge has always been easier and more comfortable for me.
I can just about do it, but it messes up my ability to focus normally for ages afterwards. Doesn’t feel like it’s good for my eyes…
Once you learn how to cross view, you can cheat at those "find the difference" quizzes: cross view them and the differences will flicker.
I love that this comes up every so often on Hacker News. I used to love Magic Eye as a kid, and have been taking stereo photos on and off ever since. Experimenting with how to take them (moving camera, from a plane, etc), and how to view them (cross view, and putting them into Meta Quest).
Thanks for sharing!
My favourite related trick is to use cross viewing to solve find-the-difference puzzles.
Set up so the two images (with slight differences) are next to each other, cross your eyes, and look for 'shimmering' spots - these are the differences between the images.
It makes differences very easy to spot, which is pretty cool!
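For fun, the same idea has a programmatic analogue (a toy sketch of mine, not anything from the article): instead of letting your visual system flag the mismatches as shimmer, compare the two images pixel by pixel and report where they differ. Tiny nested lists stand in for real grayscale images here.

```python
# Two toy 3x3 grayscale "images"; the puzzle's difference is one altered pixel.
left = [
    [0, 0, 0],
    [0, 9, 0],
    [0, 0, 0],
]
right = [
    [0, 0, 0],
    [0, 9, 5],  # the altered pixel
    [0, 0, 0],
]

# Collect (row, col) positions where the two images disagree.
differences = [
    (row, col)
    for row in range(len(left))
    for col in range(len(left[row]))
    if left[row][col] != right[row][col]
]
print(differences)  # [(1, 2)]
```

With real photos you'd load pixel arrays from files first, but the comparison step is the same.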
Those who fail to cross eye/parallel eye these images can try looking at these through a (DIY) binocular (empty pipes/used kitchen roll should work the same).
It will only work with parallel eye images though (at the end of this article).
This one looks weird when I look at it cross-eyed, but fine when I look at it the other way.
https://moultano.wordpress.com/wp-content/uploads/2025/02/17...
Yes, the ones at the bottom, in the "Parallel View" section are designed to be viewed the other way.
Sorry, I stopped reading part way and was just viewing the cool images! That will teach me!
I was confused that some were fine and others (the later ones) looked weird!
The January 1983 issue of Creative Computing had an article on stereo vision, and I built the stereoscope in it, and wrote BASIC for my IBM PC to do this. Awesome!
https://www.atarimagazines.com/creative/v9n1/162_Stereo_grap...
What I find funny is that once you get the fix, you can move your eyes around the 3D picture without losing the fix
> Your screen can display 3D photos.
That's a stretch, but I guess clickbait is required to get engagement nowadays.
As long as you don't regret the click, I don't regret baiting the click. :)
> nowadays
People have been writing provocative titles for things since before the internet.
I want to know why the multiple cameras on my phone can't be used to create 3D images.
They're not far enough apart. Human stereo vision is best inside 3-5 metres, usually, and by 10 metres out it's not that great.
That's with pupils 65mm apart, give or take. Now scale that down to the horizontal distance between lenses on a phone. Coincidentally on mine they are about 13mm apart. You just multiply everything down linearly: my camera has the same experience at 60cm distance from an object as I do at 3m. It would be pretty useless past 2m, but also with those constraints you'd notice the quality drop off with range within a single object that was approaching 50cm depth.
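That linear scaling can be sketched as a quick calculation (using the rough 65mm and 13mm baselines above; the exact numbers vary by phone):

```python
# Baselines: human interpupillary distance vs. spacing between phone lenses.
human_baseline_mm = 65.0
phone_baseline_mm = 13.0

# Disparity geometry scales linearly with the baseline.
scale = phone_baseline_mm / human_baseline_mm  # 0.2

# A human viewing an object at 3 m sees the same relative disparity
# as the phone's lens pair does at 3 m * 0.2 = 0.6 m.
human_distance_m = 3.0
equivalent_phone_distance_m = human_distance_m * scale
print(round(equivalent_phone_distance_m, 2))  # roughly 0.6 m
```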
Now that's OK in itself; you could get some useful work done with that, but there are so many provisos to it that it would be very hard to sell as a feature. To top that off, it's algorithmically painful. There's a reason commercial 3D scanners don't (typically, only) use stereo pairs; there's almost always a better way to do it.
(oh, and to get out ahead of questions: Spatial Video capture on the iphone 15 apparently uses the LIDAR sensor for the depth map, not just the cameras)
They can, no? Apple calls that spatial videos, or does that work differently?
Yes, the spatial videos that iPhones can record are just stereoscopic video. This hasn't been cloned by other manufacturers because very few of their users own devices to view them on, but that might change with Android XR.
Because they are not the same focal length? Although I'm sure software can mitigate that to some extent.
Is there a way to turn Apple spatial photos into cross view images?
Jokes on you! I can only see from one eye at a time!
Yep, same here. Discovered I was stereoblind quite late as well as I thought it was the norm and my brain just got used to it.
I thought this would be about wigglegrams
I found these all very easy to see on my phone screen. By the end of the article I was already able to instantly "go 3d" without thinking "go cross eyed" or use tricks like looking at my nose.
The problem I think most people will have is screen size. If the screen is too big then going cross eyed will be harder and might cause strain. Straight eyed can be easier, but I still think there's a limit on screen size. Magic eyes when printed are about the right size for people's heads so just work.
The second problem is the pictures look half as big as soon as you "go 3d". So although it's very cool and I buy the author's point that some pictures need the depth to make sense, there's always something lost from the full size picture. On my phone it made it like looking at a postage stamp!
Still I might try it myself given how easy it sounds. I imagine even a slight bit of wind would make some subjects impossible, though!
Is it just me or do they have some of the left/right images swapped?
The first set are (as the text says) designed for cross-eyed viewing, the others for parallel. These use the opposite left/right arrangement. So if it feels like half of them are backwards, you are maybe using the same viewing method for both sets?
I just went back and double-checked, and they're all correct. If they look inverted, you are probably diverging your eyes ("parallel view") instead of crossing them. If that's more comfortable for you, there is a section of images arranged with that orientation at the end of the article.
Came to say the same. Depth is inverted for me. Is it because instead of looking at each picture with the eyes straight, we cross them (and each sees the opposite picture)?
If they look inverted you are probably diverging your eyes instead of crossing them. In that case the images in this section should look correct. https://moultano.wordpress.com/2025/02/24/you-should-make-cr...
Heh I thought you must've tested these on a teeny tiny monitor to make them work. No wonder! But of course when crossing eyes there is essentially no limit. Thanks for the clarification.