There's a general problem in astronomy, which is that we almost totally lack depth perception. This is not to say that we don't know the distances to things, for we frequently do; that's what the cosmic distance ladder is for. However, except for quite nearby objects, for which parallax measurements can be done, you're quite some ways up the ladder, and you often have only approximate distances, and then only for certain types of objects.
Case in point: at a journal club talk last year, a number of interesting conclusions hinged on whether a chunk of radio emission was coming from a particular galaxy however many megaparsecs distant, or else was far in front of or behind it. For the galaxy, the distance is reasonably well known (certainly via the cosmological redshift, possibly via other means as well). For the radio emission, not so much. It was a continuum source, which means there were no spectral lines to give a redshift, and it wasn't a galaxy, which rules out most of the other, higher rungs of the distance ladder.
With present techniques, the Magellanic Clouds are the only galaxies for which one could really conceive of getting parallax measurements. This would have to be done with long-baseline interferometry of radio point sources, of course, using something like the VLBA. The limit is set by the angular resolution of your instrument, since a parallax is nothing more than a (tiny) angle measured on the sky. For the VLBA, observing at 10 cm from stations around 10,000 km apart, you can get about a milliarcsecond. Using the Earth's orbit as your separation, that gets you out to a few kiloparsecs.
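If you want to check this arithmetic, it fits in a few lines of Python. The two formulas are just: resolution is about wavelength over baseline, and a parallax angle theta over a baseline b corresponds to a distance b/theta. (The factor-of-a-few slop is real; in practice you can centroid a point source rather more finely than the raw beam, which is how you actually reach a few kiloparsecs.)

```python
import math

AU = 1.496e11    # meters
PC = 3.086e16    # meters
MAS = math.radians(1.0 / 3.6e6)   # one milliarcsecond in radians

def resolution(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution: about lambda / B."""
    return wavelength_m / baseline_m

def parallax_distance(baseline_m, theta_rad):
    """Distance at which a baseline subtends an angle theta: d = b / theta."""
    return baseline_m / theta_rad

theta_vlba = resolution(0.10, 1e7)   # 10 cm wavelength, 10,000 km baseline
print(theta_vlba / MAS)              # ~2: about a milliarcsecond

# Earth's orbit (2 AU across) as the parallax baseline:
print(parallax_distance(2 * AU, theta_vlba) / PC)   # ~1000 pc
```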
What could you conceive of building, anyway? If you want a super-long baseline, you need to stick to radio techniques, where you record the waveforms and feed them into a correlator elsewhere. Electronics are getting better, so I can imagine that working up to several hundred GHz, i.e. millimeter waves. We're pretty good at chucking things into solar orbits, and at powering things off solar energy at Earth-like distances from the Sun, too. So I can conceive of building a millimeter-wavelength interferometer array with a baseline of a couple of AU. And that gets you to about a nanoarcsecond. With this kind of resolution, you could just resolve a penny held up against the Sun by an astronaut at Saturn. (Or, someone else with this telescope could see the city lights of Earth from halfway across the Galaxy.)
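Same arithmetic, bigger numbers (the 19 mm penny diameter and Saturn at roughly 9.5 AU are my assumed figures):

```python
import math

AU = 1.496e11                             # meters
NAS = math.radians(1.0 / 3.6e6) / 1e6     # one nanoarcsecond in radians

theta = 1e-3 / (2 * AU)      # 1 mm wavelength over a 2 AU baseline
print(theta / NAS)           # ~0.7: about a nanoarcsecond

# A penny (~19 mm) held up by an astronaut at Saturn (~9.5 AU away):
print(0.019 / (9.5 * AU) / NAS)   # ~3 nas across, so just resolvable
```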
The Solar System moves at about 220 km/s around the Galactic center, so if you're willing to wait a year, as with traditional parallaxes, you get a baseline of 50 AU or so. In principle, you can then measure a parallax out to basically the edge of the universe, 50 gigaparsecs or so, although you'll have trouble defining a fixed background if you do that. However, this probably wouldn't help with the sort of diffuse source that I started out discussing.
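(Checking those numbers too:)

```python
import math

AU = 1.496e11      # meters
PC = 3.086e16      # meters
YEAR = 3.156e7     # seconds
NAS = math.radians(1.0 / 3.6e6) / 1e6   # one nanoarcsecond in radians

baseline = 220e3 * YEAR           # distance covered in one year of galactic orbit
print(baseline / AU)              # ~46: "50 AU or so"
print(baseline / NAS / PC / 1e9)  # ~46: tens of gigaparsecs
```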
I often wonder whether this would work: measure distance by defocusing an interferometer. By this I mean that interferometric correlators work on the assumption that the incoming wavefronts are flat. The constant-phase surface of a radio signal leaving a point source is actually a sphere (usually), but at a distance of light-years, you don't especially care. I guarantee, though, that you have seen this effect before.
Turn the focus knob of a pair of binoculars, or of one of those old cameras that actually made you focus it yourself. Objects at one distance will appear crisp, while objects in the foreground and background become fuzzy. The optics of the focus mechanism are compensating for a specific amount of this wavefront curvature. You would be correct in imagining that this could be used to measure distance, but only out to a certain maximum.
If you push the focus knob all the way to one end, generally marked as "infinity", then everything beyond some distant point will be in focus. Past that point is the far field of your device, beyond which the wavefront curvature doesn't matter; this is why nobody worries about depth of field when photographing landscapes or nebulae. The far-field distance is roughly the square of your aperture size divided by the wavelength. For your binoculars (3 cm, 550 nm), it's a kilometer or so. For the VLBA (10,000 km, 10 cm), it's about a tenth of a light-year, but for our really ambitious yet conceivable telescope (2 AU, 1 mm), it becomes a few gigaparsecs.
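Those three numbers come straight out of the aperture-squared-over-wavelength estimate:

```python
LY = 9.461e15    # meters
PC = 3.086e16    # meters
AU = 1.496e11    # meters

def far_field(aperture_m, wavelength_m):
    """Far-field distance: roughly aperture squared over wavelength."""
    return aperture_m**2 / wavelength_m

print(far_field(0.03, 550e-9))             # binoculars: ~1.6 km
print(far_field(1e7, 0.10) / LY)           # VLBA: ~0.1 light-year
print(far_field(2 * AU, 1e-3) / PC / 1e9)  # 2 AU at 1 mm: ~3 Gpc
```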
Now, I think an algorithm based on this technique would probably work, even on the diffuse cloud discussed above. You just have to adjust the focal distance (that is, the amount of wavefront curvature you correct for) until the cloud appears smallest. You also wouldn't need a fixed background against which to compare (usually distant quasars today, but they probably wouldn't be distant enough for this kind of work). Admittedly, it's rather complex to make a good interferometric image of a spread-out thing, but my understanding is that it's possible. Maybe some of the radio astronomers reading this will set me straight if not.
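For what it's worth, here is the kind of focus search I have in mind, sketched in toy form. Everything in it is a stand-in: a point source instead of a diffuse cloud, a couple hundred random baselines, and a simple quadratic-phase term, pi * b^2 / (lambda * d), standing in for the true near-field geometry. The point is only the shape of the algorithm: apply the curvature correction for a trial distance, form the image, score its sharpness, and keep the distance that scores best.

```python
import numpy as np

lam = 1e-3       # 1 mm observing wavelength, meters
d_true = 1e25    # "true" distance to the toy source, meters

rng = np.random.default_rng(0)
baselines = rng.uniform(1e10, 3e11, size=200)   # up to ~2 AU, meters

def curvature_phase(b, d):
    # Toy near-field term: the extra phase a source at finite distance d
    # imprints on a baseline b, relative to the flat-wavefront assumption.
    # The pi * b^2 / (lam * d) form is an assumption of this sketch.
    return np.pi * b**2 / (lam * d)

# "Observed" visibilities of a unit point source at distance d_true.
vis = np.exp(1j * curvature_phase(baselines, d_true))

def sharpness(trial_d):
    # Refocus: remove the curvature expected for a source at trial_d.
    refocused = vis * np.exp(-1j * curvature_phase(baselines, trial_d))
    # If trial_d is right, all the phases line up and the coherent sum
    # (the image peak) is maximized.
    return abs(refocused.sum())

trials = np.logspace(23, 27, 400)   # trial distances, meters
best = trials[np.argmax([sharpness(d) for d in trials])]
print(f"best focus at {best:.2e} m (true value {d_true:.1e} m)")
```

For a point source, "sharpest image" reduces to "all visibility phases agree", which is why the coherent sum works as the score; a real pipeline aimed at a diffuse source would have to image properly and use some compactness metric instead. If I understand correctly, this is close in spirit to the w-term corrections that wide-field imagers already apply.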