1. US10235566B2
Publication/Patent Number: US10235566B2
Publication date: 2019-03-19
Application number: 15/656,771
Filing date: 2017-07-21
Abstract: Systems and methods are disclosed for utilizing digital aerial images to determine stockpile volumes. In particular, in one or more embodiments, the disclosed systems and methods utilize a UAV to capture a plurality of digital aerial images portraying stockpiles on a site. The disclosed systems and methods can generate a two-dimensional representation of the site and a three-dimensional representation of the site based on the plurality of digital aerial images. Moreover, the disclosed systems and methods can apply an elevation filter to the three-dimensional representation of the site to identify potential stockpiles. Further, the disclosed systems and methods can utilize a neural network to analyze two-dimensional features and three-dimensional features of the potential stockpiles to identify final stockpiles. The disclosed systems and methods can then utilize three-dimensional stockpile representations to determine stockpile volumes.
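As a rough illustration of the elevation-filter and volume-integration steps this abstract describes (not the patented method itself), the sketch below assumes the aerial imagery has already been reduced to a rasterized height map and ground estimate; the thresholds and function names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def candidate_stockpiles(dem, ground, min_height=0.5, min_area_px=200):
    """Simple elevation filter: cells rising more than `min_height`
    metres above the estimated ground surface become candidate
    stockpile regions (hypothetical thresholds)."""
    above = (dem - ground) > min_height
    labels, n = ndimage.label(above)
    keep = [i for i in range(1, n + 1) if (labels == i).sum() >= min_area_px]
    return labels, keep

def stockpile_volume(dem, ground, mask, cell_area=0.01):
    """Integrate height above ground over the masked cells;
    `cell_area` is the ground area of one DEM cell in square metres."""
    heights = np.clip(dem - ground, 0.0, None)
    return float(heights[mask].sum() * cell_area)

# toy example: one synthetic mound on a flat site
yy, xx = np.mgrid[0:100, 0:100]
dem = 3.0 * np.exp(-(((xx - 50) ** 2 + (yy - 50) ** 2) / 200.0))
ground = np.zeros_like(dem)
labels, keep = candidate_stockpiles(dem, ground)
for i in keep:
    print(i, round(stockpile_volume(dem, ground, labels == i), 1), "m^3")
```

The neural-network step that separates final stockpiles from other elevated regions is not shown here.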
2. US20190088011A1
Publication/Patent Number: US20190088011A1
Publication date: 2019-03-21
Application number: 15/986,947
Filing date: 2018-05-23
Abstract: The present disclosure provides a method, a device, a terminal, a system and a vehicle for visualization of vehicle's blind spot. The method includes: acquiring real-time image data in front of the vehicle; creating a Three-Dimension (3D) map with the acquired image data; determining a current location of the vehicle in the 3D map; acquiring the relative location information between the driver and the sight-blocking object that blocks the vision of the driver; determining the blind spot in the vision of the driver in connection with structure information of the sight-blocking object and the relative location information between the driver and the sight-blocking object that blocks the vision of the driver; and determining environmental information of the blind spot in the vision in accordance with the 3D map, the current location of the vehicle in the 3D map and the blind spot in the vision of the driver.
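A minimal 2D sketch of the blind-spot determination step, reducing the 3D map in the abstract to an occupancy grid and the sight-blocking object to occupied cells; grid sizes, names, and the line-of-sight test are illustrative assumptions, not the publication's method.

```python
import numpy as np

def blind_cells(grid, driver, obstacle_value=1):
    """Mark grid cells hidden from the driver by occupied cells.
    `grid` is a 2D occupancy map (1 = sight-blocking object), `driver`
    is a (row, col) index; a coarse per-cell line-of-sight test."""
    blind = np.zeros_like(grid, dtype=bool)
    dr, dc = driver
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            steps = max(abs(r - dr), abs(c - dc))
            for t in range(1, steps):
                rr = dr + round((r - dr) * t / steps)
                cc = dc + round((c - dc) * t / steps)
                if grid[rr, cc] == obstacle_value:
                    blind[r, c] = True
                    break
    return blind

grid = np.zeros((20, 20), dtype=int)
grid[8:12, 10] = 1            # a wall-like sight-blocking object
mask = blind_cells(grid, driver=(10, 2))
print(mask.sum(), "cells are in the blind spot")
```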
3. US20190035143A1
Publication/Patent Number: US20190035143A1
Publication date: 2019-01-31
Application number: 15/959,684
Filing date: 2018-04-23
Assignee: HOVER INC.
Abstract: A system is provided including a database that ingests data from disparate image sources, with a variety of image metadata types and qualities, and manages images geospatially through the creation and continued refinement of camera solutions for each data object included. These camera solutions are calculated and refined by the database as additional data enters the system that could affect the solutions, through a combination of the application of image metadata towards image processing methods and the use of optical-only computer vision techniques. The database continually generates data quality metrics and relevant imagery and geometry analytics, which drive future collection tasking, system analytics, and human quality control requirements.
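A toy sketch of the ingest-and-refine bookkeeping this abstract describes, assuming nothing about the assignee's actual schema: when a newly ingested image overlaps existing ones, their camera solutions are flagged for re-solving and a per-image quality metric is tracked.

```python
from dataclasses import dataclass, field

@dataclass
class CameraSolution:
    """Per-image camera estimate plus a simple quality metric; an
    illustrative stand-in for the database records described above."""
    image_id: str
    position: tuple                 # e.g. (lat, lon, alt) from metadata
    reprojection_error: float
    solved_with: set = field(default_factory=set)  # overlapping image ids

class GeospatialImageStore:
    """When a new image overlaps existing ones, their solutions are
    marked stale so they can be re-solved with the added constraints."""
    def __init__(self):
        self.solutions = {}
        self.stale = set()

    def ingest(self, image_id, position, overlaps):
        self.solutions[image_id] = CameraSolution(
            image_id, position, float("inf"), set(overlaps))
        for other in overlaps:
            if other in self.solutions:
                self.stale.add(other)
        self.stale.add(image_id)

    def quality_report(self):
        return {i: s.reprojection_error for i, s in self.solutions.items()}

store = GeospatialImageStore()
store.ingest("img_001", (37.77, -122.42, 120.0), overlaps=[])
store.ingest("img_002", (37.77, -122.41, 118.0), overlaps=["img_001"])
print(store.stale)  # both images now queued for (re)solving
```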
4. US20190004197A1
Publication/Patent Number: US20190004197A1
Publication date: 2019-01-03
Application number: 15/780,578
Filing date: 2016-03-09
Abstract: Systems and methods for updating models of complex geological sequences using a sediment accumulation rate volume derived from a depth view and a time view of the complex geological sequences.
5. US20190026938A1
Publication/Patent Number: US20190026938A1
Publication date: 2019-01-24
Application number: 15/653,442
Filing date: 2017-07-18
Abstract: The subject system provides for the generation of a three-dimensional (3D) model from optical capture including the generation of the 3D model from point cloud data migration. The subject system includes generating a point cloud based on optically captured data of a project site. The subject system also includes generating raster information from the point cloud, and creating surface contours based on the raster information. The subject system also includes creating a 3D model based on the surface contours.
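The point-cloud-to-raster-to-contours pipeline in this abstract can be sketched as follows, assuming the point cloud is already an (N, 3) array; the bin size, contour levels, and use of scikit-image are assumptions, not details from the publication.

```python
import numpy as np
from skimage import measure

def rasterize(points, cell=1.0):
    """Bin (x, y, z) points into a regular grid, keeping the mean
    elevation per cell (a simple raster / DEM step)."""
    xy = points[:, :2]
    idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    np.add.at(sums, (idx[:, 0], idx[:, 1]), points[:, 2])
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    raster = np.full(shape, np.nan)
    filled = counts > 0
    raster[filled] = sums[filled] / counts[filled]
    return raster

def surface_contours(raster, levels):
    """Extract iso-elevation contours; each contour is an (N, 2) array
    of grid coordinates that a 3D model could be lofted from."""
    filled = np.nan_to_num(raster, nan=np.nanmin(raster))
    return {lv: measure.find_contours(filled, lv) for lv in levels}

# toy point cloud of a gentle hill
rng = np.random.default_rng(0)
pts = rng.uniform(0, 50, size=(5000, 2))
z = 10 * np.exp(-((pts[:, 0] - 25) ** 2 + (pts[:, 1] - 25) ** 2) / 200)
dem = rasterize(np.column_stack([pts, z]), cell=1.0)
contours = surface_contours(dem, levels=[2, 5, 8])
print({lv: len(c) for lv, c in contours.items()})
```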
6. US20190051041A1
Publication/Patent Number: US20190051041A1
Publication date: 2019-02-14
Application number: 15/671,280
Filing date: 2017-08-08
Abstract: A method and apparatus for automated projection mapping previsualization is provided. A computer model of an object is received, at a controller of a device, from a publicly accessible remote mapping server, the computer model comprising a publicly available three-dimensional computer model, the computer model defining an object in geographic coordinates and elevation coordinates, the object located at given geographic coordinates. The controller generates a time dependent previsualization projection mapping model for the object using images to be projected onto the object, the computer model, and data for generating one or more of Sun behavior and the Moon behavior at the given geographic coordinates. The controller controls a display device to render a previsualization of the time dependent previsualization projection mapping model.
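For the "Sun behavior" input mentioned in this abstract, a previsualization only needs an approximate solar elevation per time step; the sketch below uses a standard low-precision formula (no equation of time, no refraction) and is not drawn from the publication.

```python
import math
from datetime import datetime, timezone

def solar_elevation(lat_deg, lon_deg, when_utc):
    """Rough solar elevation angle (degrees) at a location and UTC time,
    just enough to decide whether the object is sunlit at each step."""
    n = when_utc.timetuple().tm_yday
    decl = math.radians(-23.44) * math.cos(math.radians(360.0 / 365.0 * (n + 10)))
    solar_hour = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_el))

# sample the elevation over a day to drive a time-dependent previsualization
for hour in range(0, 24, 3):
    t = datetime(2019, 6, 21, hour, 0, tzinfo=timezone.utc)
    print(hour, round(solar_elevation(45.0, -75.0, t), 1))
```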
7. US20190051042A1
Publication/Patent Number: US20190051042A1
Publication date: 2019-02-14
Application number: 16/055,456
Filing date: 2018-08-06
Abstract: A ceiling map building method includes estimating a scale of each ceiling image based on information related to the ceiling image and information related to another ceiling image including a same object as included in the ceiling image, the scale being represented as a ratio of an amount of movement of the object between the two ceiling images to an amount of movement of the camera (6) between the positions thereof when the two ceiling images were respectively captured (ST16), and building a ceiling map (2) (ST2) by converting the ceiling images in accordance with the respective scales so as to have sizes suitable for the ceiling map and combining the converted ceiling images (ST84).
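The scale estimate in this abstract (object displacement in pixels divided by camera displacement on the floor) and the subsequent resizing step can be sketched as below; the OpenCV dependency and all numbers are assumptions, not values from the publication.

```python
import numpy as np

def image_scale(obj_px_a, obj_px_b, cam_pos_a, cam_pos_b):
    """Scale of a ceiling image as the ratio between how far a matched
    ceiling object moved in pixels and how far the camera moved on the
    floor between the two captures (roughly pixels per metre)."""
    obj_shift = np.linalg.norm(np.asarray(obj_px_b) - np.asarray(obj_px_a))
    cam_shift = np.linalg.norm(np.asarray(cam_pos_b) - np.asarray(cam_pos_a))
    return obj_shift / cam_shift

def normalize_for_map(image, scale, target_scale):
    """Resize an image so that all tiles share `target_scale` pixels
    per metre before they are combined into the ceiling map."""
    import cv2  # assumed dependency for the resize step
    factor = target_scale / scale
    return cv2.resize(image, None, fx=factor, fy=factor)

# the same light fixture moved 120 px between frames while the robot
# moved 0.5 m, so this tile is roughly 240 px per metre
s = image_scale((310, 200), (430, 200), (0.0, 0.0), (0.5, 0.0))
print(round(s, 1), "px per metre")
```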
8. US20190050000A1
Publication/Patent Number: US20190050000A1
Publication date: 2019-02-14
Application number: 15/671,743
Filing date: 2017-08-08
Assignee: Skydio, Inc.
Abstract: An autonomous vehicle that is equipped with image capture devices can use information gathered from the image capture devices to plan a future three-dimensional (3D) trajectory through a physical environment. To this end, a technique is described for image-space based motion planning. In an embodiment, a planned 3D trajectory is projected into an image-space of an image captured by the autonomous vehicle. The planned 3D trajectory is then optimized according to a cost function derived from information (e.g., depth estimates) in the captured image. The cost function associates higher cost values with identified regions of the captured image that are associated with areas of the physical environment into which travel is risky or otherwise undesirable. The autonomous vehicle is thereby encouraged to avoid these areas while satisfying other motion planning objectives.
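A minimal sketch of the image-space costing idea: project candidate 3D trajectory points through an assumed pinhole camera and sum a cost image that would, in practice, be derived from depth estimates; the intrinsics, penalties, and trajectories here are illustrative only.

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy):
    """Pinhole projection of 3D points (camera frame, z forward) into
    pixel coordinates; purely illustrative intrinsics."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)

def trajectory_cost(points_cam, cost_image, fx=400, fy=400, cx=320, cy=240):
    """Sum the image-space cost under each projected trajectory point;
    points projecting outside the frame get a fixed penalty (an
    arbitrary choice for this sketch)."""
    px = project(points_cam, fx, fy, cx, cy)
    h, w = cost_image.shape
    total = 0.0
    for u, v in px:
        if 0 <= int(v) < h and 0 <= int(u) < w:
            total += cost_image[int(v), int(u)]
        else:
            total += 10.0
    return total

# compare two candidate trajectories against a cost image with a risky
# region (e.g. a close obstacle) on the right side of the frame
cost_image = np.zeros((480, 640))
cost_image[:, 400:] = 5.0
straight = np.array([[0.0, 0.0, z] for z in np.linspace(1, 10, 10)])
veer_right = straight + np.array([[0.3, 0.0, 0.0]]) * np.arange(10)[:, None]
print(trajectory_cost(straight, cost_image), trajectory_cost(veer_right, cost_image))
```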
9. US20190043251A1
Publication/Patent Number: US20190043251A1
Publication date: 2019-02-07
Application number: 16/053,971
Filing date: 2018-08-03
Abstract: Methodologies, systems, and computer-readable media are provided for generating a dynamic 3D communication map. Data is collected from a number of autonomous drones as they navigate through a particular area. The autonomous drones can collect environmental data, location data, signal strength data, signal usability data, etc. and transmit that data back to a computing system. The computing system analyzes the data received from the autonomous drones and generates a dynamic 3D communication map that indicates the signal strength and/or the signal usability of various wireless communication signals as a function of time and other variables.
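The aggregation step behind such a dynamic 3D communication map can be sketched as space-time binning of drone samples; the bin sizes, sample format, and function names below are assumptions, not details from the publication.

```python
import numpy as np
from collections import defaultdict

def build_signal_map(samples, cell=10.0, time_bin=600.0):
    """Aggregate drone samples into a dynamic 3D signal map: each entry
    is keyed by an (x, y, z, t) bin and stores the mean signal strength.
    `samples` rows are (x, y, z, unix_time, rssi_dbm)."""
    acc = defaultdict(list)
    for x, y, z, t, rssi in samples:
        key = (int(x // cell), int(y // cell), int(z // cell), int(t // time_bin))
        acc[key].append(rssi)
    return {k: float(np.mean(v)) for k, v in acc.items()}

def lookup(signal_map, x, y, z, t, cell=10.0, time_bin=600.0):
    """Query the map at a position and time; None if no drone has
    sampled that space-time bin yet."""
    key = (int(x // cell), int(y // cell), int(z // cell), int(t // time_bin))
    return signal_map.get(key)

samples = [
    (12.0, 3.0, 25.0, 1000.0, -61.0),
    (14.0, 6.0, 28.0, 1100.0, -65.0),
    (55.0, 40.0, 30.0, 1200.0, -82.0),
]
m = build_signal_map(samples)
print(lookup(m, 13.0, 5.0, 27.0, 1050.0))  # -> -63.0
```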
10. US20190054938A1
Publication/Patent Number: US20190054938A1
Publication date: 2019-02-21
Application number: 16/166,945
Filing date: 2018-10-22
Abstract: The present disclosure relates generally to vegetation detection and, in particular, to a vegetation detection and alert system for a railway vehicle.
11. US20190072390A1
Publication/Patent Number: US20190072390A1
Publication date: 2019-03-07
Application number: 15/697,255
Filing date: 2017-09-06
Abstract: In aspects of visual mapping of geo-located tagged objects, a mobile device includes a mapping module that receives map data of a geo-fenced area from a controller device of the geo-fenced area. The mobile device can scan for wireless-enabled devices within a geo-fenced area, and receive a designation input of a tagged object that corresponds to one of the wireless-enabled devices. The mapping module is implemented to generate a visual mapping that displays a route from the mobile device to the tagged object based on the map data. The mapping module can receive object metadata about the tagged object as the mobile device approaches the tagged object along the route of the visual mapping, and a 3D visual representation of the tagged object is displayed based on a received 3D object model of the tagged object.
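A toy version of the route step described in this abstract, with the geo-fenced area's map data reduced to a small walkable grid; the real system's map data format, object metadata, and 3D rendering are not modeled, and all names here are illustrative.

```python
from collections import deque

def route_to_tag(grid, start, tag):
    """Breadth-first route from the mobile device's cell to the tagged
    object's cell over walkable map cells (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == tag:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(route_to_tag(grid, start=(0, 0), tag=(2, 0)))
```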
12. EP3446289A1
Publication/Patent Number: EP3446289A1
Publication date: 2019-02-27
Application number: 17726539.4
Filing date: 2017-03-22
13. EP3447729A1
Publication/Patent Number: EP3447729A1
Publication date: 2019-02-27
Application number: 18195372.0
Filing date: 2017-06-23
Inventor: Owechko, Yuri  
Abstract: A method of spatial localization, comprising: (i) acquiring from a sensor supported by a first vehicle of a group of vehicles, an image including a portion of a physical surface of each vehicle of a subset of the group of vehicles, from a position spaced away from the subset of the group of vehicles; (ii) identifying features in the acquired image; (iii) associating identified features in the acquired image with identified features in a three-dimensional reference representation of the subset of the group of vehicles; (iv) selecting a plurality of pairs of features in the acquired image having corresponding identified pairs of features in the reference representation; (v) for each selected feature pair in the acquired image, determining an angle of view between the features of the feature pair, and generating a three-dimensional geoarc associating the determined angle of view and the feature pair in the reference representation corresponding to the selected feature pair; and (vi) identifying locations where the geoarcs for the selected feature pairs overlap.
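Steps (iv)-(vi) of this abstract can be sketched numerically: measure the angle of view between two image features and mark the candidate positions (a discretized geoarc) from which the corresponding reference features subtend that angle; the intrinsics, tolerance, and reference coordinates below are invented for illustration.

```python
import numpy as np

def angle_of_view(px_a, px_b, fx, fy, cx, cy):
    """Angle subtended at the camera by two image features, from their
    pixel coordinates and assumed pinhole intrinsics."""
    ray_a = np.array([(px_a[0] - cx) / fx, (px_a[1] - cy) / fy, 1.0])
    ray_b = np.array([(px_b[0] - cx) / fx, (px_b[1] - cy) / fy, 1.0])
    cosang = ray_a @ ray_b / (np.linalg.norm(ray_a) * np.linalg.norm(ray_b))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def geoarc_mask(grid_points, ref_a, ref_b, theta, tol=0.02):
    """Mark candidate sensor positions whose view of the two reference
    features subtends the measured angle `theta` (within a tolerance);
    overlapping masks from several feature pairs localize the sensor."""
    va = ref_a - grid_points
    vb = ref_b - grid_points
    cosang = np.sum(va * vb, axis=1) / (
        np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1))
    ang = np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.abs(ang - theta) < tol

# candidate positions on a coarse 3D grid around the vehicle group
xs, ys, zs = np.meshgrid(np.linspace(-50, 50, 41),
                         np.linspace(-50, 50, 41),
                         np.linspace(0, 30, 13), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)
theta = angle_of_view((200, 240), (500, 300), fx=500, fy=500, cx=320, cy=240)
mask = geoarc_mask(grid, np.array([10.4, 0.3, 1.2]),
                   np.array([-5.1, 8.2, 0.9]), theta)
print(mask.sum(), "candidate positions on this geoarc")
```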
14. US20190080032A1
Publication/Patent Number: US20190080032A1
Publication date: 2019-03-14
Application number: 15/762,668
Filing date: 2015-11-10
Abstract: Target objects are simulated using different triangle mesh sizes to improve processing performance. To perform the simulation, a seed point for the target object within a constraint volume is determined, the seed point representing a vertex of a first triangle forming part of the target object. One or more hexagonal orbits of triangles adjacent the first triangle are propagated, whereby the hexagonal orbits of triangles form the target object. The size of each triangle is determined based upon dimensions of the target object, and the target object is generated.
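The hexagonal-orbit construction this abstract describes can be sketched in 2D: concentric rings of triangles grown around a seed vertex on a triangular lattice, with the edge length standing in for the size derived from the target dimensions; the coordinate scheme and all sizes below are assumptions.

```python
import math

def hex_patch_mesh(seed, rings, tri_size):
    """Triangle mesh built as concentric hexagonal orbits around a seed
    vertex: `rings` orbits of triangles, each with edge length
    `tri_size`. Returns (vertices, triangles) with index triples."""
    sx, sy = seed
    coords = {}
    vertices = []
    for q in range(-rings, rings + 1):
        for r in range(-rings, rings + 1):
            if abs(q + r) <= rings:
                x = sx + tri_size * (q + r / 2.0)
                y = sy + tri_size * (r * math.sqrt(3) / 2.0)
                coords[(q, r)] = len(vertices)
                vertices.append((x, y))
    triangles = []
    for q in range(-rings, rings):
        for r in range(-rings, rings):
            up = [(q, r), (q + 1, r), (q, r + 1)]
            down = [(q + 1, r), (q + 1, r + 1), (q, r + 1)]
            for tri in (up, down):
                if all(v in coords for v in tri):
                    triangles.append(tuple(coords[v] for v in tri))
    return vertices, triangles

verts, tris = hex_patch_mesh(seed=(0.0, 0.0), rings=1, tri_size=2.0)
print(len(verts), "vertices,", len(tris), "triangles")  # 7 vertices, 6 triangles
```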