Status Needs review
Categories General
Created by Guest
Created on Jun 14, 2021

View Window Background: Sync to Google Earth or Cesium browser view?

It would be great to be able to sync MicroStation's (Mstn's) View Window Background to the camera view of a Google Earth or Cesium session running in a browser.

Mstn's View Window Background is already used to host raster data for Photomatching. It would be great to be able to sync that raster data to the browser's window, along with the GIS coordinate info, so that the user can orbit, pan and zoom the browser view from Mstn using the geopositioning provided.*
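As a rough illustration of the Mstn-to-browser direction of the sync, the CesiumJS side could be driven with `camera.setView`. A minimal sketch follows; the `MstnCameraPose` shape and `onMstnCameraChanged` bridge are hypothetical stand-ins for whatever hook MicroStation would expose, while the CesiumJS calls themselves are standard:

```ts
// Sketch: push a MicroStation-style camera pose into a CesiumJS viewer.
// `MstnCameraPose` and `onMstnCameraChanged` are hypothetical; the Cesium
// API calls (Viewer, Cartesian3.fromDegrees, camera.setView) are real.
import { Viewer, Cartesian3, Math as CesiumMath } from "cesium";

interface MstnCameraPose {
  longitude: number; // degrees, from the DGN's geographic coordinate system
  latitude: number;  // degrees
  height: number;    // metres above the ellipsoid
  heading: number;   // degrees clockwise from north
  pitch: number;     // degrees, negative looks down
}

declare function onMstnCameraChanged(cb: (pose: MstnCameraPose) => void): void; // hypothetical bridge

const viewer = new Viewer("cesiumContainer");

onMstnCameraChanged((pose) => {
  viewer.camera.setView({
    destination: Cartesian3.fromDegrees(pose.longitude, pose.latitude, pose.height),
    orientation: {
      heading: CesiumMath.toRadians(pose.heading),
      pitch: CesiumMath.toRadians(pose.pitch),
      roll: 0.0,
    },
  });
});
```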

But the big drawback with modeling this way is that there is no depth culling: the Mstn elements behind trees or buildings in the view would not be culled / overprinted by the foreground elements of the 'background'.

Looking at the VR/MR space, realtime depth culling is already possible, so it would be great to enhance the View Window Background to be less of a background and more 'holographic' in nature. Reading this post, it seems a lot of 3D glTF info is being streamed to the browser that could be 'ripped'. Cesium is apparently looking to incorporate KTX 2.0 and glTF's meshopt compression. Not sure if Google Earth will support these, but they sound like something that could be leveraged to dynamically reference 3D mesh + textures in Mstn at the display / GPU level.
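To make the depth-culling idea concrete, here is a minimal sketch of per-pixel depth compositing. It assumes both the streamed 'background' and the Mstn render could each supply a colour buffer plus a depth buffer in common units; actually resolving depth from the streamed glTF content at the GPU level is exactly the capability this idea is asking for.

```ts
// Sketch: for each pixel, keep whichever fragment (streamed background vs.
// Mstn model render) is nearer the camera. Smaller depth = closer to eye.
function compositeWithDepth(
  modelColor: Uint8ClampedArray, modelDepth: Float32Array,
  bgColor: Uint8ClampedArray, bgDepth: Float32Array,
  width: number, height: number
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(width * height * 4);
  for (let i = 0; i < width * height; i++) {
    // Ties go to the model so Mstn geometry stays visible at equal depth.
    const src = modelDepth[i] <= bgDepth[i] ? modelColor : bgColor;
    out.set(src.subarray(i * 4, i * 4 + 4), i * 4); // copy one RGBA pixel
  }
  return out;
}
```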

Bentley is already collaborating with Cesium, and Mstn is now able to handle streamed HLOD data as iModel reference attachments, point clouds and rasters. The immersive 3D photographic / mapping info provided by Google and Cesium is currently a little disconnected from, and underutilised in, Mstn.

https://communities.bentley.com/products/betas/microstation_insiders/f/microstation-insiders-forum-1218741496/213826/view-window-background-synch-to-google-earth-or-cesium-browser-view

* The Ctrl-B View Background setting could be a drop-down that allows the user to overlay photos hosted on the View Window Background, so that those photos could be photomatched against the photographic info provided by Google Street View etc. or the underlying glTF mesh info. This would speed up the photomatching triangulation process by generating approximate camera / pose info that can then be fine-tuned by the existing Photomatching tool.
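As an illustration of that seeding step, the sketch below converts Street View-style metadata (position, heading, pitch, field of view) into an approximate pinhole camera for the solver to refine. The `PhotomatchSeed` shape is a hypothetical stand-in for whatever the Photomatching tool would accept; the focal-length-from-FOV conversion is standard pinhole geometry.

```ts
// Sketch: derive an approximate camera seed from Street View-style metadata.
// `PhotomatchSeed` is hypothetical; the maths is the usual pinhole model.
interface StreetViewPose {
  lat: number; lon: number; heightM: number;
  headingDeg: number; pitchDeg: number; fovDeg: number; // horizontal FOV
}
interface PhotomatchSeed {
  position: [number, number, number];
  headingDeg: number; pitchDeg: number;
  focalPx: number; // focal length in pixels, for the intrinsics
}

function seedFromStreetView(pose: StreetViewPose, imageWidthPx: number): PhotomatchSeed {
  // Pinhole model: f = (w / 2) / tan(hfov / 2)
  const focalPx = (imageWidthPx / 2) / Math.tan((pose.fovDeg * Math.PI / 180) / 2);
  return {
    position: [pose.lon, pose.lat, pose.heightM], // geographic; reproject to the DGN's GCS before use
    headingDeg: pose.headingDeg,
    pitchDeg: pose.pitchDeg,
    focalPx,
  };
}
```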

  • Robert Jones, Jul 8, 2021

    For a low-fi version, you could do what the NohBoard plugin does for OBS Studio, where gamers stream what's being played on screen but overlay it with another application (e.g. a keyboard mirror). Except in this case it would be inverted: the Google imagery (or whatever Windows application you nominate) would be streamed directly to the viewport background image...
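    A sketch of the capture side, using the browser's standard Screen Capture API (`getDisplayMedia`) to paint a nominated window onto a canvas standing in for the view background; routing those frames into Mstn itself is the part Bentley would need to provide:

    ```ts
    // Sketch: capture a user-nominated window/tab and repaint it onto a
    // canvas every frame. getDisplayMedia prompts the user to pick a source.
    async function streamWindowToBackground(canvas: HTMLCanvasElement): Promise<void> {
      const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
      const video = document.createElement("video");
      video.srcObject = stream;
      await video.play();

      const ctx = canvas.getContext("2d")!;
      const draw = () => {
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        requestAnimationFrame(draw); // keep repainting while the capture runs
      };
      draw();
    }
    ```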


    I do, however, think the photomatch tool needs a different mechanism for aligning the model with photos: one based on aligning edges that run to their vanishing points, rather than the current point-to-point method, which is unpredictable and very hard to use when the points in the model are not a perfect 1:1 dimensional match for the few fuzzy points you can see in the photo.
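    For what the edge-based approach could look like, a small sketch: the vanishing point of a family of parallel model edges is the intersection of their image-space lines, computed here with homogeneous coordinates.

    ```ts
    // Sketch: vanishing point as the intersection of two image-space lines,
    // each line defined by two picked points along a photographed edge.
    type Pt = [number, number];

    // Homogeneous line through two points: cross product of (x, y, 1) vectors.
    function lineThrough([x1, y1]: Pt, [x2, y2]: Pt): [number, number, number] {
      return [y1 - y2, x2 - x1, x1 * y2 - x2 * y1];
    }

    // Intersection of two homogeneous lines, again via the cross product.
    function vanishingPoint(
      a: [number, number, number],
      b: [number, number, number]
    ): Pt | null {
      const x = a[1] * b[2] - a[2] * b[1];
      const y = a[2] * b[0] - a[0] * b[2];
      const w = a[0] * b[1] - a[1] * b[0];
      return Math.abs(w) < 1e-9 ? null : [x / w, y / w]; // null: edges parallel in the image
    }

    // Example: two photographed edges known to be parallel in the model.
    const vp = vanishingPoint(
      lineThrough([10, 300], [400, 280]),
      lineThrough([15, 420], [390, 330])
    );
    console.log(vp);
    ```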