5th International Symposium on Big Data, Deep Learning & Advanced Predictive Analytics, November 5
Presentation: AI Robotics – SLAM Status Quo and Future Directions
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The past decade has seen major SLAM advancements and new SLAM algorithms. The bulk of the work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and the vehicle pose. Further, some research emphasis has been placed on issues such as non-linearity, data association, and landmark characterization, all of which are vital to moving a practical and robust SLAM implementation forward. It is safe to state that SLAM has reached a point where we can enable large-scale, real-world applications and witness a steady transition of the technology into mainstream industry.

The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large numbers of landmarks that may be present in real-life scenarios. Kalman filter-based algorithms, as an example, require time that is quadratic in the number of landmarks to incorporate each sensor observation, and hence scalability is a concern.

In this talk, we analyze the current state of SLAM and consider future directions. Based on the status quo, we describe open research challenges and new research opportunities that deserve careful scientific investigation. One of the topics discussed revolves around the impact that deep learning will, or should, have on SLAM. Live demonstrations will support the discussion.
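The quadratic per-observation cost mentioned in the abstract can be seen in a minimal EKF-SLAM sketch (illustrative only, not from the talk; a simplified linear observation model is assumed): even when a single landmark is observed, the Kalman gain and covariance update multiply against the full state covariance, whose side length grows linearly with the number of landmarks.

```python
import numpy as np

def ekf_slam_update(mu, Sigma, z, landmark_idx, R):
    """One EKF-SLAM measurement update with a simplified linear
    observation of a single landmark. Although only one landmark is
    observed, the covariance update touches the full (3 + 2n)-sized
    state covariance, so each update costs O(n^2) in the number of
    landmarks n."""
    dim = mu.shape[0]
    # Observation matrix picks out the observed landmark's (x, y);
    # the robot pose (x, y, theta) occupies state slots 0..2.
    H = np.zeros((2, dim))
    j = 3 + 2 * landmark_idx
    H[0, j] = 1.0
    H[1, j + 1] = 1.0
    S = H @ Sigma @ H.T + R             # innovation covariance (2x2)
    K = Sigma @ H.T @ np.linalg.inv(S)  # Kalman gain (dim x 2)
    mu = mu + K @ (z - H @ mu)
    Sigma = (np.eye(dim) - K @ H) @ Sigma  # full dim x dim product
    return mu, Sigma

# State: robot pose (x, y, theta) plus n landmarks (x_i, y_i).
n_landmarks = 50
dim = 3 + 2 * n_landmarks
mu = np.zeros(dim)
Sigma = np.eye(dim)
R = 0.1 * np.eye(2)
mu, Sigma = ekf_slam_update(mu, Sigma, np.array([4.0, 2.0]), 10, R)
print(Sigma.shape)  # every update manipulates the full 103x103 matrix
```

Doubling the landmark count doubles the matrix side and quadruples the work per update, which is exactly why submap, sparse-information, and particle-filter formulations were developed for large-scale SLAM.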
Presentation Plus LIVE DEMO: Management of The Analytic Lifecycle for Big Data
Speaker: Alain Biem, PhD, is a data scientist and former Vice President of Analytics and Chief Scientist of Advanced Solutions Delivery at Opera Solutions
The Analytic Lifecycle involves building, deploying, and maintaining a variety of analytic models, on a variety of computing platforms, for a variety of tasks. Managing the Analytic Lifecycle for Big Data, at rest or in motion, is a challenging endeavor that requires careful use and leveraging of various Big Data platforms and software assets as the data evolves. In this presentation, we describe the management of the Big Data Analytics lifecycle as an essential part of the data lifecycle and as a pre-
More Speakers to be added
Big Data (Hadoop) Data & Systems Modeling
by Dominique Heger, PhD, Chief Executive, DHTechnologies, Texas, USA
The term Big Data highlights high volumes of data. What constitutes a high data volume basically depends on the organization and its (data) history. In a nutshell, data management in an organization is focused on delivering data to the appropriate data consumers (people and/or applications) in the most effective and efficient manner possible. The goal of data quality and data governance is trusted data. The objective of data integration is available data or, in other words, delivering the data to the consumers in the proper format. Big Data data and systems modeling can aid in all these aspects.

Modeling can be described as the process of creating a simpler, flexible, mathematical representation of a system that may or may not yet exist. Modeling can be used as a powerful communication tool among the technical and business stakeholders and the consumers of any (data) project. One of the major aspects of modeling is that the technique is not limited to describing a system as it is; various modeling-based cases (sensitivity studies) can be conducted to communicate different aspects of possible workload and configuration scenarios.
In any design that involves the movement of data among systems, it is paramount to specify the lineage of the data flow among the physical data structures, including the mapping and transformation rules necessary to accomplish the project's goals. This level of design requires an understanding of both the physical implementation and the business implications of the data. Data and systems modeling is further used to design data structures at various levels of abstraction, from the conceptual to the physical stage. When differentiating between modeling and design, the focus is normally on distinguishing between the logical design and the design closer to the physical implementation. Hence, data and systems modeling is basically a necessity for any design. During the Big Data Symposium, actual Hadoop (MapReduce) and Data Analytics models will be presented to further highlight the importance of modeling in any Big Data project.
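As a flavor of the kind of sensitivity study described above, here is a deliberately coarse analytic model of MapReduce job time (illustrative only, not one of the models from the talk; all throughput and overhead parameters are hypothetical placeholders): job time is modeled as a slot-limited map phase plus a slot-limited shuffle/reduce phase plus fixed framework overhead, and the model is then swept over mapper slot counts.

```python
def mapreduce_job_time(input_gb, map_slots, reduce_slots,
                       map_gb_per_s=0.05, reduce_gb_per_s=0.04,
                       shuffle_ratio=0.3, overhead_s=30.0):
    """Coarse analytic model of a MapReduce job: map phase and
    shuffle/reduce phase, each limited by available slots and
    per-slot throughput, plus a fixed framework overhead.
    All default parameters are hypothetical."""
    map_time = input_gb / (map_slots * map_gb_per_s)
    shuffle_gb = input_gb * shuffle_ratio
    reduce_time = shuffle_gb / (reduce_slots * reduce_gb_per_s)
    return overhead_s + map_time + reduce_time

# Sensitivity study: how does job time respond to adding map slots?
for slots in (8, 16, 32, 64):
    t = mapreduce_job_time(input_gb=500, map_slots=slots, reduce_slots=16)
    print(f"{slots:3d} map slots -> {t:8.1f} s")
```

Even a toy model like this communicates something a prose description cannot: adding map slots shows diminishing returns once the reduce phase and fixed overhead dominate, which is precisely the kind of workload-versus-configuration conversation modeling enables between technical and business stakeholders.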
(1) Linux Performance Optimizations for Big Data Environments.
(3) Hadoop Ecosystem, MapReduce Framework and the IT Challenges.
(4) Business Analytics and Big Data.
(5) Business Analytics –
(7) Before everyone goes predictive analytics ballistic.
(8) Small and medium-