IARPA researchers want immersive, 3D imagery for training and simulation
The project seeks software and applications to enhance the intelligence community’s surveillance images.
Intelligence community researchers want to make it easier for operators, law enforcement, and rescue personnel to scope out difficult terrain before setting out on a mission, according to a new solicitation.
An Intelligence Advanced Research Projects Activity broad agency announcement, which was issued in partnership with the Department of the Interior, calls for the development of software systems “that can create photorealistic, navigable site models using a highly limited corpus of imagery” taken from the ground, traffic cameras, unmanned aerial vehicles, and satellites.
The WRIVA project, which stands for “Walkthrough Renderings for Images of Varying Altitudes,” aims to enhance the intelligence community’s surveillance imagery by making it more realistic.
But to do that, researchers need data. The plan is to collect imagery to support the research and development, with an emphasis on the kinds of locations where the technology would be most applicable, such as refugee camps, which are stood up quickly and whose needs change just as rapidly.
“This research program is aiming to fuse imagery collected at a variety of altitudes – that includes ground level cameras, traffic cameras and satellite imagery – and collected at a variety of angles and viewpoints in order to build an immersive virtual environment for locations around the world that can be challenging to access,” Ashwini Deshpande, WRIVA’s Program Manager, told reporters on June 17.
“We're hoping that this capability will allow potential operators a lot of insight into the locations that they are otherwise unable to go to before they need to conduct the mission.”
The idea is to develop a technology that law enforcement, military, or aid workers could use to familiarize themselves with a location before arriving, and that could potentially “revolutionize” training, planning, and modeling.
“These groups often have to deliver rapid support, life saving aid to unfamiliar or dynamic areas. Allowing them to prepare ahead of time helps keep them out of harm's way when they have to conduct these activities,” Deshpande said.
Adam Norige, who leads MIT Lincoln Laboratory’s WRIVA effort, said the technology could also improve accessibility and broaden the user base.
“Because right now you can develop these synthetic, immersive environments, using other technologies, but it does require a high level of expertise and a considerable amount of technology to do that,” he said. “I mean, it is possible. But this will definitely bring it to a whole new level of accessibility, which I think will broaden the use of and allow many use cases.”
There’s also the potential to merge the technology with other platforms, such as virtual reality.
“So we've taken these environments and moved them into the VR space. And one of the things that we're really focused on now is using data from sources like what WRIVA will be able to provide to build immersive environments, that allows really the US disaster relief enterprise to respond in a much more effective way,” Norige said.
“Because right now, if there was a large disaster somewhere in the U.S., a lot of people move into that area, so they can do a lot of the inspections in person—they drive out to the damage sites, they take pictures, they throw tape measures into holes and try to quantify the damage there. But what we see is moving those capabilities into if you want to use the term 'Metaverse,' or into [a] kind of virtual reality are these immersive environments. You can actually expand your kind of the workforce and allow really rapid response. And I think it would make the US much more agile in ways that we are certainly trying to aspire to be right now.”
Research is expected to start this fall and continue for three and a half years. Proposals are due by the BAA’s August 5 closing date, according to the documentation. Proposer questions are due by July 11.