EDRA 48
Immersive Tangible Landscape Modelling: A Step Toward the Future for Integrative Ecological Planning
Payam Tabrizian, Brendan Harmon, Anna Petrasova, Vaclav Petras, Helena Mitasova
North Carolina State University
Good morning everyone, and thank you, x, for the introduction. Today I will talk
about Immersive Tangible Landscape modelling.
The talk is presented by the GeoForAll Laboratory at the
Center for Geospatial Analytics (CGA), North Carolina State University
CGA is an interdisciplinary research and education center with a focus on
geospatial computing, modeling, analytics, and geovisualization.
geospatial.ncsu.edu
This is a project we developed at NC State's GeoForAll laboratory to make the landscape design process more effective through the use of tangible interaction,
immersive virtual environments, and geospatial analytics.
The GeoForAll lab is part of the Center for Geospatial Analytics, which focuses on geospatial computing, modeling, analytics, and geovisualization.
Tangible Landscape
Immersive Virtual Environment (IVE)
I will specifically discuss why and how we coupled Tangible Landscape, a tangible interface for GIS, with an immersive virtual environment to make the ecological design process more effective,
and, more importantly, how this technology can potentially help bridge the gaps between experiential and ecological analysis of landscape.
Nassauer, 1995; Sarukhán and Whyte; Ode et al., 2008; Fry et al.
Along with our growing understanding of the impact of landscape exposure on individuals' cognitive, affective, and physiological well-being,
a majority of recent investigations have aimed to find conceptual and empirical common ground between the landscape's ecological functioning
and experiential measures such as aesthetics and preferences, perceived restorativeness, and so on.
Complementary frameworks such as cultural ecosystem services, Gobster's aesthetic-ecological paradigm,
and Nassauer's cultural-ecological theory have emphasized the importance of human-environment interaction and the parallel analysis of experience and ecology.
However, in the context of landscape design and research, the ecological and experiential dimensions reveal notable disparities in many respects. In terms of representation, for instance,
we use realistic 3D renderings, perspectives, and walk-through animations to represent the feel and experience of the landscape through the eye of the beholder,
whereas we show and reason about ecological functioning using 2D or 2.5D geospatial maps or schemas, abstract enough to infer global relationships.
The ways we measure and evaluate these dimensions are also considerably different.
We work with communities, stakeholders, and decision makers to evaluate individuals' feelings about potential designs,
whereas we objectively measure and deal with numbers and concrete maps to discuss how water is regulated in the landscape,
how much pollution we can remediate, or how a certain landform configuration can impact soil erosion, sedimentation, and so on.
Needless to say, the software and tools behind working with these dimensions are considerably different as well.
Motivation: Tangible geospatial modelling interfaces
Interaction through mouse, keyboard and display does not encourage creativity.
Working with geospatial models and analysis is not intuitive and requires specialized software and training.
Collaboration is restricted as typically only one user at a time can navigate and modify models.
I am sure this photo shows a familiar setting: we often get together around a screen to solve a design problem and use a mouse or touch
to manipulate 3D data on a 2D screen. Such manipulation of data often requires knowledge of specific,
often complex software, and usually only a single person can access the data,
creating barriers to collaboration and creativity.
Tangible user interfaces can address some of these issues. Instead of dealing with complex user interfaces,
they afford more natural and intuitive interaction with geospatial data and analytics by offloading much of the cognitive load onto the body.
Tangible Landscape: real-time coupling with GIS
VIDEO
With Tangible Landscape you can hold a GIS in your hands - feeling the shape of the earth, sculpting its topography, and directing the flow of water.
My colleagues at the CGA and I developed a tangible user interface called Tangible Landscape.
We took advantage of the fast and relatively accurate 3D scanning of the Kinect
and developed the first system that couples a 3D physical model with a GIS in real time.
This video should give you a basic idea of the interaction - using a model of a real landscape,
we can modify the topography and get instant feedback on how our changes impact water flow and ponding.
The system is powered by GRASS GIS, an open-source software package for geospatial modelling and analysis.
This allows us to flexibly integrate various algorithms and simulations, ranging from water flow to land-use change and even fire and disease spread modelling.
How it works
Tangible Landscape couples a digital and a physical model through a continuous cycle of 3D scanning, geospatial modeling, and projection
So how does the system work? In the previous slide you saw a physical model of a landscape made from kinetic sand.
This model is continuously scanned by the Kinect, and the scanned data are imported into GRASS GIS,
where a 3D digital elevation model is computed and a selected analysis or model is run. A composite image
of the selected map layers is then projected over the model. In this way, the system couples
the digital and physical models in a continuous cycle of scanning, modeling, and projection,
providing the user with continuous feedback.
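To make the cycle concrete, here is a minimal sketch of one pass of the loop using the GRASS GIS Python API. The file and map names, the hole-filling step, and the choice of flow accumulation as the example analysis are assumptions for illustration, not the project's actual code.

```python
import grass.script as gs

def scan_cycle(scan_csv):
    """One pass of the scan-model-project loop (names are assumed)."""
    # Bin the Kinect point cloud into a raster digital elevation model
    gs.run_command('r.in.xyz', input=scan_csv, output='scan_raw',
                   method='mean', separator='comma')
    gs.run_command('g.region', raster='scan_raw')
    # Interpolate small holes left by the scanner so the DEM is continuous
    gs.run_command('r.fill.stats', input='scan_raw', output='scan_dem',
                   flags='k')
    # Run the selected geospatial model, e.g. flow accumulation
    gs.run_command('r.watershed', elevation='scan_dem', accumulation='flow')
    # A composite of the DEM and analysis layers is then rendered and
    # warped onto the physical model by the projector (omitted here)
```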
Interactions
surface
points
lines
areas
areas
To make Tangible Landscape more flexible, we developed multiple ways to interact with the physical models.
Here we use tangible objects, like a wooden marker, to specify point locations on the landscape, such as viewpoints, trailheads, or single trees.
Recently, we have started to experiment with using a laser pointer to draw objects such as points, lines, or polygons.
Another option is to use colored sand to create polygons, where the color represents a certain attribute of the polygon and the height of the sand can represent the intensity of that attribute.
Our most recent interaction is creating areas using colored felt of different shapes placed on the model.
These interactions can be combined to achieve intuitive interaction for a particular application.
Now I will show you some of the applications we developed for different study sites; each uses a different geospatial model and a different type of interaction.
Applications: visibility
Visibility analysis
Topography is directly linked to visibility, so here we explore viewsheds on our campus.
The physical sand model represents a digital surface model with canopy, and we place markers to specify viewpoints.
Once a marker is detected, the viewshed is dynamically computed and visualized; here the visible areas are shown in yellow.
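As a rough illustration of this step, a handler like the following sketch could run each time a marker is detected; the callback name, map names, and observer height are assumptions, and the 'b' flag requests r.viewshed's binary visible/invisible output.

```python
import grass.script as gs

def marker_detected(x, y):
    """Hypothetical callback fired when a wooden marker is found at (x, y)."""
    # Binary viewshed over the campus digital surface model;
    # observer_elevation approximates eye height in meters
    gs.run_command('r.viewshed', input='dsm', output='viewshed',
                   coordinates=(x, y), observer_elevation=1.75, flags='b')
```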
Applications: urban growth
Simulation of urban growth scenarios with FUTURES model
We coupled Tangible Landscape with an urban growth model called FUTURES, which was developed at North Carolina State University.
By placing colored sand, we create red zones that attract new development or green zones for conservation.
The height of the sand can represent the intensity, in other words, how strongly the zone attracts development.
Then we identify the polygons and rerun the urban growth model with these new conditions.
After the users remove the sand, you can observe the animated growth of the city as predicted by FUTURES based on the specified interventions.
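One plausible way to fold the sand zones into the model, sketched below with assumed map names and an assumed maximum sand height, is to convert the scanned heights into a weight raster that raises or lowers development potential before FUTURES is rerun.

```python
import grass.script as gs

# Red sand attracts development in proportion to its height (normalized by
# an assumed 5 cm maximum); green sand repels it; elsewhere the weight is 0.
gs.mapcalc("weights = if(isnull(red_height),"
           " if(isnull(green_height), 0.0, -1.0),"
           " min(red_height / 0.05, 1.0))")
# The FUTURES addon (r.futures.pga) is then rerun with 'weights' supplied
# as the raster that alters development potential (full call omitted here;
# see the addon manual for its remaining required inputs).
```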
Serious games: coastal flooding
Save houses from coastal flooding by building coastal defenses
Structured problem-solving with rules, challenging objectives, and scoring
We thought Tangible Landscape would be a great tool for serious gaming, an emerging field and a promising medium for engaging the public in science.
We prepared a coastal flooding game for a public event. The model was created from a digital surface model of Bald Head Island on the North Carolina coast. We asked players to protect the residents on the coast when a foredune is breached during a storm surge.
With a limited sand budget, they tried different ways of building barriers, and we were surprised by how quickly they learned that a breach in one place can flood houses far away from the breach.
Perspective view of inundated landscape
Surface inundation and flow model
As you have seen so far, Tangible Landscape represents the landscape as a projection-augmented model perceived from a bird's-eye perspective.
So it is not capable of fully representing the experience of the landscape in the way we perceive it from a human viewpoint.
Immersive Virtual Environments (IVE)
Immersive virtual environments surround the user with images, video, or other stimuli to generate a perception of being physically present in a non-physical world.
High degree of "presence"; more robust assessment of human perception and preferences
http://marclee.io/en/10-000-moving-cities-same-but-different-vr/
Immersive virtual environments are powerful tools for representing the landscape with a fine granularity, close to what is experienced from a human viewpoint.
IVEs surround the user with a continuous stream of stimuli tied to the user's head and body movements, creating a feeling of being physically present in a virtual world.
They have been shown to elicit a high degree of presence and immersion, and they are very robust tools for assessing perceptions.
The coupling rationale
Real-time updating of a georeferenced 3D model of the landscape based on user interaction with Tangible Landscape
Updating the attributes (shape, position) of 3D objects (e.g., plants) and surfaces (e.g., terrain) with their corresponding tangible objects
Enabling the user to control the viewpoints (camera position) and animation (e.g., walkthrough, flythrough)
So how did we couple these two technologies?
The coupling concept is based on an adaptive 3D modelling framework. The idea was to generate a georeferenced 3D model of the landscape under study
in which the features and behavior of 3D elements, like trees, buildings, and surfaces, are linked to their corresponding tangible objects in
Tangible Landscape. In this way, as users manipulate the tangible model and pieces, they can see the changing landscape rendered in real time on a display or through a virtual reality headset like the Oculus Rift.
Additionally, we wanted to allow users to control the camera and animation so they can step into and navigate to their desired location in the landscape.
Physical setup
To implement the concept, we added a 3D modelling and game engine package called Blender
to the Tangible Landscape setup, with outputs to a display and an immersive virtual reality headset.
What is Blender? Why Blender?
Free and open-source 3D modelling and game engine software
Easy scripting (Python)
GIS and Virtual reality plugin
High-quality real-time rendering and shading
What is Blender? Why Blender?
Blender is a free and open source program for modeling, rendering, simulation, animation, and game design.
The software has an internal Python-based IDE and add-ons for importing GIS data to georeference the scene and for displaying the viewport in HMDs.
It also allows real-time, high-quality rendering and shading.
Software Architecture
Briefly describing the workflow: GRASS GIS and Blender are loosely coupled through file-based communication. As the user interacts with the tangible model or objects, GRASS GIS writes a copy of the geo-coordinated information or simulation output to a specified system directory.
We implemented a monitoring module in Blender's scripting environment that constantly watches the directory, identifies the type of incoming information, and applies the operations needed to update the 3D model. The input data can range from geospatial features like a raster or a point cloud, to simple coordinates in a text file, to signals that prompt a command such as removing an object from the scene.
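A minimal sketch of this watcher pattern is shown below, assuming Blender 2.8+'s timer API (bpy.app.timers); the directory path, file extensions, and handler functions are illustrative, not the project's actual module.

```python
import os
import bpy

WATCH_DIR = "/tmp/tl_exchange"   # directory that GRASS GIS writes into
seen = set()

def handle_command(path):
    # Placeholder: e.g. a signal to remove an object from the scene
    pass

def handle_raster(path):
    # Placeholder: update the terrain mesh from an incoming GeoTIFF
    pass

def poll_watch_dir():
    """Dispatch newly arrived files by type, then re-run in 0.5 s."""
    for name in sorted(os.listdir(WATCH_DIR)):
        path = os.path.join(WATCH_DIR, name)
        if path in seen:
            continue
        seen.add(path)
        if name.endswith(".txt"):     # coordinates or command signals
            handle_command(path)
        elif name.endswith(".ply"):   # scanned point cloud
            bpy.ops.import_mesh.ply(filepath=path)
        elif name.endswith(".tif"):   # raster layer such as the terrain
            handle_raster(path)
    return 0.5

bpy.app.timers.register(poll_watch_dir)
```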
Landform and water bodies
Interaction: hand, sculpting knife
3D processing: terrain GeoTIFF raster and water polygon
Simulation: Water flow (r.sim.water), Ponding (r.fill.dir)
Projection: Water Surface Area, Mean depth
Now let's see how some of the landscape features are processed through the application.
For example, when the landscape is manipulated by hand, a terrain GeoTIFF raster and a water polygon are processed.
As users carve the landscape, water flow and accumulation simulations are continuously projected onto the physical model,
along with numeric indicators of the depth and surface area of the retained water.
At the same time, the point cloud and water polygon are transferred to Blender to update the 3D model.
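The modules named on the slide can be chained roughly as follows; this is a hedged sketch with assumed raster names rather than the production code.

```python
import grass.script as gs

# Overland water flow driven by the freshly scanned terrain
gs.run_command('r.slope.aspect', elevation='scan_dem', dx='dx', dy='dy')
gs.run_command('r.sim.water', elevation='scan_dem', dx='dx', dy='dy',
               depth='water_depth')

# Ponding: fill depressions, keep the positive difference as retained water
gs.run_command('r.fill.dir', input='scan_dem', output='filled',
               direction='flow_dir')
gs.mapcalc("ponds = if(filled - scan_dem > 0.0, filled - scan_dem, null())")

# Numeric indicators projected beside the model: cell count and mean depth
stats = gs.parse_command('r.univar', map='ponds', flags='g')
print("cells:", stats['n'], "mean depth:", stats['mean'])
```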
Vegetated surfaces
Interaction: Felt pieces, laser pointer
3D processing: Importing and populating species classes using the plants library
Simulation: Complexity, Heterogeneity, Biodiversity, Remediation capacity, Landscape structure analysis (r.li)
Projection: percent remediated, number of patches, patch richness, Shannon diversity index
Users can design tree patches using colored felt pieces. They can either draw and cut their preferred shapes with scissors or select from a library of cutouts of various shapes.
Each color represents a landscape class based on the National Land Cover Dataset classification, like deciduous, evergreen, etc. For instance, in this example green denotes the evergreen class and
eastern pine trees, red means deciduous and red maple, and blue represents wetland species and river birch.
GRASS GIS applies image segmentation and classification to the scanned image to assign the RGB values to their corresponding landscape classes.
Using landscape structure analysis, we compute and project various metrics related to landscape heterogeneity, biodiversity, and complexity,
which, as you can see, are projected below the landscape model.
After importing, Blender applies a particle system modifier to populate the corresponding species in each patch, based on a predefined spacing and density for each species.
Some degree of randomness is applied to the size, rotation, and succession of the species to mimic the real-world appearance of a patch.
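A minimal sketch of that populating step, assuming the Blender 2.8+ particle API and illustrative object names:

```python
import bpy

patch = bpy.data.objects["patch_evergreen"]   # imported patch polygon
tree = bpy.data.objects["eastern_pine"]       # tree model from the library

mod = patch.modifiers.new(name="trees", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.type = 'HAIR'                 # hair particles stay fixed on the surface
settings.render_type = 'OBJECT'        # render each particle as an instance
settings.instance_object = tree
settings.count = 200                   # density from the species' spacing (assumed)
settings.size_random = 0.4             # vary size to mimic a real patch
settings.use_rotations = True
settings.rotation_factor_random = 1.0  # randomize rotation as well
```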
Linear features, paths
Interaction: Wooden markers, Laser pointer
3D processing: importing polyline shapefiles and extrusion based on the path profile, assigning animation and camera path
Simulation: Traveling salesman (Python heuristic), Least-cost-path analysis (r.walk), Slope analysis
Feedback: Trail profile, slope, least-cost path
(Show the trail and wooden cubes.)
Additionally, users can use tangible objects, in this case wooden cubes, to designate a pathway, in this example a boardwalk.
As the user inserts each checkpoint, GRASS GIS recalculates and projects an optimal route using an algorithm that computes the least-cost walking path.
A profile of the trail and the slope of its segments are projected as feedback (show them).
Additionally, the polyline feature is processed in Blender as a walkthrough simulation that can be viewed on screen or in an HMD.
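The least-cost routing between two checkpoints might look like the following sketch, which chains r.walk's cumulative cost and direction outputs into r.path; the coordinates and map names are assumptions.

```python
import grass.script as gs

start = (637803, 222422)   # previous checkpoint (assumed coordinates)
stop = (638012, 222601)    # newly inserted checkpoint

# Cumulative anisotropic walking cost and movement directions from the start
gs.run_command('r.walk', elevation='scan_dem', friction='friction',
               output='walk_cost', outdir='walk_dir',
               start_coordinates=start)
# Trace the least-cost path back from the new checkpoint
gs.run_command('r.path', input='walk_dir', vector_path='trail_segment',
               start_coordinates=stop)
```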
Human views
Interaction: Wooden marker, Laser pointer
3D processing: importing polyline shapefiles and extrusion based on the path profile, assigning animation and camera path
Simulation: Viewshed
Feedback: Viewshed area, depth of view, viewdepth variation
The 3D model is interactive, so at any time during the interaction users can freely navigate the environment and explore different vantage points with the mouse.
But we wanted to make that feature tangible as well, so we used a wooden marker with a colored tip that denotes the viewer's location and direction of view.
The marker is exported as a polyline feature. Once it is imported into Blender, the scene camera is relocated to the line's starting point and the direction of view is aligned to the line's endpoint.
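A minimal sketch of that camera relocation, assuming Blender 2.8+ and that the marker line imports as a poly curve with illustrative object names:

```python
import bpy
from mathutils import Vector

line = bpy.data.objects["view_line"]   # imported viewer-direction polyline
spline = line.data.splines[0]
start = (line.matrix_world @ spline.points[0].co).xyz
end = (line.matrix_world @ spline.points[-1].co).xyz

cam = bpy.data.objects["Camera"]
cam.location = start + Vector((0.0, 0.0, 1.75))   # approximate eye height
look = (end - cam.location).normalized()
# Aim the camera's -Z axis along the view direction, keeping Y up
cam.rotation_euler = look.to_track_quat('-Z', 'Y').to_euler()
```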
Immersion
Using a virtual reality add-on, the Blender viewport is continuously mirrored to the head-mounted display,
so users can pick up the headset and get immersed in their preferred views.
An additional camera is set to follow the imported trail feature to initiate a walkthrough animation if required.
Realism
Interaction: Blender GUI
Modes: real-time abstract (low-poly), real-time semi-realistic (viewport shading), photo-realistic (radiosity, V-Ray)
Optionally, the user can manipulate the degree of realism. We paired each 3D feature, from the sky to the trees, with a low-poly counterpart.
Both the low-poly models and the full Blender scene are rendered in real time and update almost instantly.
Realism
To gain even higher realism, the user can switch to the Cycles renderer, which produces much higher resolution and rendering fidelity.
(Optional) We believe that manipulating realism can be very helpful for quickly studying volumes in the early design phases,
and it can also serve as a powerful medium for communicating landscape science to younger age groups.
Future work
User studies (creativity, problem solving, decision making, collaboration, participation)
Completing the features library (plant species, urban features)
Research application
While we are constantly working to upgrade the features, our first objective is to conduct user studies.
Currently we are collaborating with a psychologist to design experiments to test the effectiveness of the system on creativity, problem solving and learning.
Especially, we are very curious to see how this application can support collaborative decision making and trade-off assessment, that is, how stakeholders, designers, and scientists
can work together to find win-win solutions between the ecological and experiential aspects of the landscape under study.
Resources
If you are interested in learning more about Tangible Landscape, these are some useful resources to get you started.
While I am taking questions, you can watch this video to see how an ecological scientist and a designer work together to design a landscape.
As you follow the design process, please note how the development enables a dialogue between ecological assessment and aesthetic evaluation.