Exploring the future of media with Tobias Stärk

  • Writer: Live team
  • Apr 16
  • 8 min read


Interview: Oliver Webb


Tobias Stärk is a creative technologist with experience across a broad spectrum of media creation, including extended reality, visual effects, real-time rendering, virtual production and interactive media. Having worked as a VFX artist on acclaimed projects including Guardians of the Galaxy Vol. 2 and 1899, he has a long-standing fascination with virtual and mixed reality and with how we interact with digital content in the physical world, while keeping a close eye on developments in AI. Stärk sits down with LIVE to discuss his career to date, as well as his thoughts on the future of this innovative tech.


How did you first get involved in the world of VR?


I have been interested in these new technologies from the very beginning. I studied film back in 2007, and in 2009 I started my bachelor’s degree at SAE University College in Australia, around the time Avatar was released. So there were already some stereoscopic films happening, and I was impressed by how you could show 3D content on a screen. Before that, people had been experimenting with those different-coloured anaglyph glasses. I was really interested in using this for film, and my bachelor’s thesis was on pre-production for stereoscopic films.


My first real VR project came when I moved to Berlin and worked on a stereoscopic 360° commercial for Samsung and BMW, filmed in Portugal with a go-kart loaded up with cameras. Back then, this was for the Samsung Gear VR, which you had to slot your phone into. It was fun learning all the ins and outs of filming and doing post-production in stereoscopic 360° with almost no resources. I also bought an Oculus Rift DK2 headset around that time, so my interest in VR took off.


You’ve worked extensively with real-time rendering. Could you talk us through your experience?


For many years, I was a VFX artist, working as a compositor on film, TV, adverts and all sorts of media. I was interested in real time and XR, and got into Unreal and Unity. I launched a VR pet project for an art exhibition, combining VR and analogue printmaking to create a photo darkroom that could show people how it was originally done. Nowadays, it’s hard to access a lab where you can learn this if you don’t live in a big city, and it’s good for understanding where some of the tools in Photoshop, like Dodge and Burn, come from. The process is very analogue; for example, you need to be careful about when you can turn on the lights again, and there’s a proper order of steps to follow to achieve a good result.



I made a few conscious decisions to make it more immersive: adding dials and controls, leaving out a teleportation function so you have to physically move around the room, and allowing no shortcuts like faster chemical processes. I also included real-life consequences: switching on the lights will ruin your undeveloped photos, for example. There’s lots more I want to do with it, and I hope I can dedicate more time to it. This was a fantastic use case for VR – I showed it at two exhibitions and it was well received, with people travelling from far away and queuing to try it.


In the VFX industry, real-time engines such as Unreal became relevant when render quality got good enough that you could use them on various projects at a fraction of the cost and time of traditional offline renders. I’ve always loved the fact that an engine designed for making games could be used to create linear content, too.


I was later hired by Framestore to work on the Netflix series 1899, which was shot on a big LED screen with in-camera VFX (ICVFX). They asked me if I wanted to be part of a small ninja team and essentially be boots on the ground in Berlin, with the rest of the team based in London. Of course, I immediately said yes. My main role was to assist the director and main creatives, such as the DOP and production designer, with virtual scouting. This meant collaboratively moving around a virtual set in VR and using this to plan the shoot on the big LED wall.


During production, I took an on-set role, where the team at the ‘brain bar’ and I were running the LED wall – this massive, 52m-long, 7m-high curved display. After that wrapped, I joined Dark Bay, the studio where we shot 1899, as creative tech lead, working on one of the biggest LED stages in the world. Having the keys to a state-of-the-art studio was amazing, and I learned everything from the ground up about making Unreal work as a real-time engine for ICVFX.


After my time at Dark Bay, I joined Woodblock Animation Studio as head of XR and real time.


Can you give us an example of where real time, VFX and XR come together in your daily life?


At Woodblock, I worked on our first project for Sphere. We created a spot for Aston Martin, displayed on the outside of the spherical LED screen in Las Vegas during the Formula 1 race weekend. The cars in the spot were rendered in Unreal, combined with Houdini cloud simulations. Since it was our first project for such a special screen, we debated a lot about how best to use this canvas for maximum effect, and even built a web viewer in Babylon.js that you could use to play the spot, fly around and jump to specific locations. A few Sphere exterior projects later, this simple viewer is still being used for internal reviews – and with clients. It’s easy to use and runs on almost any device.
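Woodblock’s viewer isn’t public, but the core pattern is easy to sketch in Babylon.js: play the rendered spot as an emissive video texture on a sphere and let an orbit camera fly around it, with named presets for jumping to viewpoints. Everything concrete below – the canvas id, the placeholder video file, the preset values – is an assumption for illustration, not the actual tool.

```typescript
import {
  ArcRotateCamera, Engine, MeshBuilder,
  Scene, StandardMaterial, Vector3, VideoTexture,
} from "@babylonjs/core";

// Hypothetical page setup: a <canvas id="viewer"> element and a
// placeholder preview render stand in for the real Sphere spot.
const canvas = document.getElementById("viewer") as HTMLCanvasElement;
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

// Orbit camera so reviewers can fly around the building.
const camera = new ArcRotateCamera(
  "orbit", Math.PI / 2, Math.PI / 3, 30, Vector3.Zero(), scene,
);
camera.attachControl(canvas, true);

// Play the spot on the outside of a sphere, like the venue's exterior LEDs.
const dome = MeshBuilder.CreateSphere("dome", { diameter: 10, segments: 64 }, scene);
const mat = new StandardMaterial("spotMat", scene);
mat.emissiveTexture = new VideoTexture("spot", "spot-preview.mp4", scene);
mat.disableLighting = true; // LED panels are self-illuminated, so skip shading
dome.material = mat;

// Named viewpoints mirroring the viewer's "jump to specific locations"
// feature. The values here are made up.
const presets = {
  street: { alpha: 0, beta: Math.PI / 2.2, radius: 18 },
  aerial: { alpha: Math.PI / 4, beta: Math.PI / 6, radius: 40 },
} as const;

function jumpTo(name: keyof typeof presets): void {
  const p = presets[name];
  camera.alpha = p.alpha;
  camera.beta = p.beta;
  camera.radius = p.radius;
}

jumpTo("aerial");
engine.runRenderLoop(() => scene.render());
```

Because it is plain WebGL in a browser, a viewer like this needs no install, which fits the “runs on almost any device” point above.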



For a more recent project, we had to produce a lot of content for a live show and used VR extensively to review the shots and simulate how the show would look in the end. Since most software packages can talk to each other nowadays, you can do fun stuff like stream your viewport from your compositing tool or renderer with NDI directly to the virtual media plane in VR, and teleport around the venue to see it from new angles.


In the early days of production, it’s crucial to understand the limits and challenges as fast as possible. I see a lot of good use cases for XR tech, especially in live entertainment – be it a concert, theatre or an immersive art exhibition – where you can preview what you’re doing by going into a virtual space and experiencing how it will look.


In my opinion, as soon as the final product isn’t going to be watched on a flat two-dimensional screen at arm’s length, it’s worth thinking about VR.


You’ve worked across quite a wide variety of mediums. Is there a particular area that you prefer working in?


I don’t have a specific favourite, but I like the live components. During live shows, you can immediately see how different people are reacting to things, and they have this shared experience. It’s more interesting with live content because things can go wrong and it’s a bit nerve-racking. I think readers of your magazine will probably know and understand this.


What I enjoy most is combining all my different interests and finding the links between these different mediums. For example, on live projects, VR isn’t always the final output medium, but it can be used as a tool to make the final show better. I use my skills from virtual production, VFX and VR for the big screen and it all comes together in the end to make a new project.


I don’t have one favourite medium, but the biggest one for me is probably VR – I think it’s still got so much potential. Mostly, though, I enjoy the combination of working across different mediums.


Do you have any favourite live projects you’ve worked on?


I would count 1899 as a live project because our job was to make it work on set, while 100 people were running around the studio. Making Unreal work in that environment was cool. It was super interesting to be at the production in the studio and play my part in making the show shine.



There was also an Audi project for the OMR Festival in Hamburg, a huge marketing conference. We developed an interactive experience in which people sat in car seats, screens came down around them, and they ‘drove’ through a live-rendered scene that could react to each person’s mood.


We had face detection paired with mood detection, which was really cool. There was even a heartbeat sensor that also influenced the visuals: the environment would change when someone’s heart rate went up or down, and the mood detection would check whether they were happy, sad, scared or something in between. It was great to create this unique experience. As is usually the case with these commercial projects, we were working with limited time and resources, so making it all work in such a short time frame and turning it into a cool experience that everyone loved was fantastic.
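The project’s actual code isn’t public, but the underlying pattern – smoothing a noisy biometric signal and mapping it onto scene parameters – can be sketched minimally. The 60–120 bpm range, the mood labels and the single ‘intensity’ parameter below are assumptions for illustration; the real experience would drive far more inputs inside the engine.

```typescript
// Minimal sketch of driving a visual parameter from biometrics.
type Mood = "happy" | "sad" | "scared" | "neutral";

interface SceneParams {
  intensity: number; // 0 = calm environment, 1 = dramatic
  palette: Mood;     // which colour/mood variant the scene shows
}

// Exponential smoothing keeps a jittery heart-rate reading from
// making the visuals flicker frame to frame.
let smoothedBpm = 70;
const ALPHA = 0.1; // smoothing factor: lower = steadier, slower to react

function update(rawBpm: number, mood: Mood): SceneParams {
  smoothedBpm = ALPHA * rawBpm + (1 - ALPHA) * smoothedBpm;

  // Map an assumed resting-to-excited range (60–120 bpm) onto 0..1.
  const t = Math.min(1, Math.max(0, (smoothedBpm - 60) / 60));

  return { intensity: t, palette: mood };
}

// Example tick: values a heartbeat sensor and mood detector might emit.
console.log(update(95, "scared")); // → rising intensity, "scared" palette
```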


What do you think the future of VR tech holds?


That’s a good question. I still think it’s underused, because it’s a medium where you can spatially interact with 3D content. What would be ideal is lightweight glasses or contact lenses that give you the option to add a digital layer to your peripheral vision and collaborate with others in it.


The future is definitely going to be less about bulky headsets and more about bringing virtual content into the physical world in everyday life, for immersive experiences in entertainment and education.


How are developments in AI technology impacting your work?


I’m very optimistic about generative AI, maybe more so than others in my industry. There are quite a few sceptics around, especially in the animation sector, and some people dismiss it and don’t want to use it at all, for their own good reasons. It’s having a lot of impact at the moment because there is so much new stuff coming out, even day by day. But I think there are some good use cases, and the need for digital content is higher than ever. I don’t see it as this thing that’s going to take our jobs. It’s going to supplement them, giving us new tools and the freedom to experiment with new methods. I’m generally more open to this than others might be.



Then there are the large language models, such as GPT-4/ChatGPT or DeepSeek. I use ChatGPT like everyone else, I think – mainly for getting clear, structured answers to specific questions without having to browse through different online resources, for help with specific coding problems, or for organising data. I try to use it more and more for manual, labour-intensive tasks that would distract me from the actual work.


I think AI will be the centrepiece of everything we do and how we use tools going forward. We’re probably not going to be talking about AI as much in ten years’ time because it’s just going to be everywhere. In the future, it’s going to be obvious that some AI is involved in whatever you do.


Do you have any other projects on the horizon for 2025?


On the topic of AI, I want to do a deep dive into Gaussian splatting next. From what I’ve seen of these neural rendering techniques, there’s still a lot of potential there for live entertainment and immersive experiences.
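For context, in 3D Gaussian splatting as commonly formulated (Kerbl et al., 2023), a scene is stored as a set of 3D Gaussians, each with a colour c_i and opacity α_i; after projecting each Gaussian to the image plane, a pixel p gets its colour from a depth-sorted alpha blend:

```latex
% Depth-ordered alpha compositing over the N Gaussians covering pixel p
C(p) = \sum_{i=1}^{N} c_i \, \alpha_i'(p) \prod_{j=1}^{i-1} \bigl(1 - \alpha_j'(p)\bigr),
\qquad
\alpha_i'(p) = \alpha_i \exp\!\Bigl(-\tfrac{1}{2}\,(p - \mu_i)^\top {\Sigma_i'}^{-1} (p - \mu_i)\Bigr)
```

where μ_i and Σ_i' are the Gaussian’s projected 2D mean and covariance. Unlike NeRF-style approaches, there is no network to evaluate per pixel at render time, which is part of why the technique looks promising for real-time and live use.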


I have some other projects floating around, but nothing is set in stone just yet that I can talk about. My goal is to work on more amazing live shows and immersive/interactive XR projects this year – anything that’s not been done before and is technically challenging.


This feature was first published in the Mar/Apr 2025 issue of LIVE.


