Brian McClendon’s Post

SVP Engineering, Venture Partner, Board Member

Starting today, Niantic, Inc.'s Scaniverse on iOS offers unlimited Gaussian Splat generation - on-device, in about a minute. I've been working in 3D graphics for 38 years, and I don't think we've seen anything this transformative since real-time texture mapping in 1991.

Interestingly, splats were also introduced in 1991 but were too slow to be useful. A new, faster formulation was introduced at SIGGRAPH in 2023 and surpasses NeRFs in both performance and usability. More specifically, splats can be fully interactive and can replace meshes in many circumstances with better results.

A few cloud-based services offer a "scan & upload" implementation, followed by 30-60 minutes of processing time. Scaniverse will give you a completed splat before others can finish uploading their data. Scaniverse is 100x faster, and it's free.

Having nearly immediate feedback - while you're still on location and in the moment - greatly improves the quality and hit rate of your captures. It's the difference between One Hour Photo and digital cameras, because some things can only be captured 'now'.

At Niantic, Inc., we believe the future of real-world reconstruction will be radically changed by Gaussian Splats, and that Scaniverse will radically change how you see the world in 3D. Give it a try!
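For readers unfamiliar with the technique the post refers to: a Gaussian splat scene is rendered by depth-sorting many translucent Gaussians and alpha-compositing them front to back, which is what makes splats fully interactive at render time. The sketch below is purely illustrative - it is not Scaniverse's or the SIGGRAPH 2023 paper's implementation - and uses a simplified axis-aligned 2D Gaussian falloff, where real 3D Gaussian Splatting projects full anisotropic 3D covariances to the screen.

```python
import math

def gaussian_alpha(px, py, cx, cy, sx, sy, opacity):
    # Illustrative axis-aligned 2D Gaussian falloff at pixel (px, py).
    # Real 3DGS evaluates an anisotropic covariance projected from 3D.
    dx, dy = (px - cx) / sx, (py - cy) / sy
    return opacity * math.exp(-0.5 * (dx * dx + dy * dy))

def composite_pixel(px, py, splats):
    # Front-to-back alpha compositing over depth-sorted splats.
    # Each splat is (cx, cy, sx, sy, opacity, color), sorted near -> far.
    color, transmittance = 0.0, 1.0
    for cx, cy, sx, sy, opacity, c in splats:
        a = gaussian_alpha(px, py, cx, cy, sx, sy, opacity)
        color += transmittance * a * c          # add this splat's share
        transmittance *= (1.0 - a)              # light left for splats behind
        if transmittance < 1e-4:                # early exit: pixel is opaque
            break
    return color
```

Because compositing needs no mesh extraction or ray marching, each pixel is just a short sorted loop like this, which is why splats can run interactively even on a phone.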

Brian McClendon

SVP Engineering, Venture Partner, Board Member

1w

And here's a video of the acquisition experience. https://www.youtube.com/shorts/KE6bnojBwMg

Brian McClendon

SVP Engineering, Venture Partner, Board Member

1w

Here's an example of real-time interactivity (in 8th Wall) of a Scaniverse splat! https://8w.8thwall.app/splat-cactus/

Jack Haehl

Computer Science and Physics Undergrad | XR Developer/Researcher | Bringing the future to you

1w

This is fantastic! I've been trying it with many objects in my house and it works so well even on an older iPhone! My only request is that I wish we could exclude captured points from the final output via a bounding box or something. I would love to be able to eliminate the wall in the background from the final output when scanning just a single object for instance.

Mike Rawitch

🚀A.I. + Drones/Satellites = Innovative Sustainable Solutions 🌎

1w

Hey Brian - could we get a demo of this?

Jeff Sipko

Founder @ Solipsist Studios. Ex-Hololens. Ex-AWS.

1w

The training and rendering speeds here are both insane! Any papers coming out about how this was achieved?

Keiichi Matsuda

Design for New Realities

1w

Wow this is huge!!

Anthony Maes

Co-founder @ dejavu 🐈⬛

1w

Impressive, congrats to Keith and team, can't wait to see what this tech does to the wayspots in the VPS reality browser

Tim Reha

Creative Technologist, Digital Strategist, Product Marketing, Business Development, Partnerships, Generative AI Pipelines, Spatial Media, Video, Live Streaming, Spatial Commerce.

1w

Nice, excited to try this out

Markus Lanxinger

Founder & CEO at Plinth

1w

Seriously impressive stuff. The first on-device capture and training application without any cloud involvement. And the speed and quality, all on an iPhone. Mind blown 🤯

Sagar Patel

Freelance | Real-time Graphics / VR / AR / XR

1w

I'd love to try this, any plans on bringing it over to Android?
