Growing best-of-breed tech for this decade and next...
Our inspiration comes from many sources, including many people past and present, but especially from nature and the land...preferably the highlands. We firmly believe that best-of-breed tech gets developed for, and tested in, the mud, rain, snow, and sun of real-world conditions, with real users, out in the field, rather than in an ivory-tower lab...high-fidelity training exercises are the easiest opportunity to do this...besides, it's a great excuse to get out and enjoy!
Background
Highlands View Ranch, LLC (HVR) is a partnership dedicated to empowering individuals and teams who are working to improve everyone's abilities to achieve their own (and others') dreams for life, liberty, and the pursuit of happiness, in the best traditions of these United States of America. We are proud to be American, and support any and all other people who wish to adopt our nation's core beliefs (e.g., as outlined in our constitution). Nationality does NOT trump humanity, however. The point of establishing our nation was to ensure that we, the people (all of us, as we have finally come to realize), are assured of mutual protection of our liberties. That's a mission worth working to achieve (and to maintain), no matter the nationality of the people who make that commitment to each other.
Our approach to our part of this mission is to augment human intellect (thank you, Doug Engelbart, for your career, tools, and philosophy) and to aid in expanding the utility of autonomous, semi-autonomous, and embedded systems. We do this via R&D to enable artificially intelligent systems to:
① Better emulate human cognition.
② Develop the ability to better tell and "understand" stories about experiences, intentions, plans, and expectations, in order to enhance their teamwork with humans.
A useful perspective on artificial intelligence was presented in 2017 by DARPA, the US Defense Advanced Research Projects Agency, which has had more than a little impact on AI: https://www.youtube.com/watch?v=-O01G3tSYpU
Mission
Specifically, we are seeking better means to safely, sanely, and ethically bring competent, trustworthy autonomy into widespread use, via our unique family of embeddable AI modules, or cognition engines, which we call our Autonomous Cognition Engine for 'n' Senses (ACE#S). Think of ACE#S as something like a brain and spinal cord that can be embedded in various kinds of systems. We are aiming at a variety of applications in which trustworthy, competent, taskable, autonomous (or semi-autonomous) intelligent systems with very high-fidelity emulation of human cognitive capabilities and ethics are required:
① Enabling creation of a wide variety of lower-cost autonomous personal robots (Competent, Taskable Autonomous Robotic Systems as Teammates, or CoTARS Teammates), or kits to build them, which team with and learn from their team leader (e.g., individual worker, fabricator, craftsman, farmer, rancher, police officer, etc.) and assigned teammates. Such robots lighten the load of daily chores and raise the productivity and quality of work of that team leader's heterogeneous team, thus improving those same factors for any organization employing that individual and their team. Applications for such bots might initially include:
(a) Safe food production (from farm to table, as farmhands enhancing precision agriculture and autonomously (or semi-autonomously) harvesting crops, ranch hands caring for and protecting livestock alone or working with livestock guardian dogs, as autonomous transport vehicles moving harvested food into/out-of warehouses and cold storage terminals, etc.).
(b) Security (e.g., various forms of sentry bots and other security forces), including improving means for heterogeneous teams to do more and do it better. We are carefully exploring the idea of multi-species heterogeneous teams, incorporating inquisitive, sensor-laden, semi-autonomous mobile bots (CoTARS) into K9 teams. Cognitive overload of the human team member(s) is a key concern in such applications, which is why trustworthy, competent, taskable autonomy is so important.
② Consumer electronics devices, such as enabling creation of competent personal tutoring devices (e.g., a Personal SYNTHetic Instructional Agent, PSYNTHIA) for children and adults, to augment learning by, and critical thinking skills of, those individuals.
③ Improving transportation solutions (e.g., autonomous (and semi-autonomous) vehicles and delivery bots).
④ Cybersecurity (e.g., cybersec agents embedded in computer servers in giant server farms to augment existing cybersecurity technologies and procedures).
Essential to these goals is enhancing these systems' abilities to build and maintain both expertise (over the long haul) and situational awareness (moment by moment). This also specifically involves improving how humans, systems, teams, and organizations acquire and use information collaboratively to achieve shared and competing goals.
This approach requires improving such systems' emulation of human cognitive architecture by:
① Autonomous building, use, and maintenance of emulations of humans' mental models by embodied intelligent systems;
② Computational auditory scene analysis (CASA) to improve situational awareness, focused on how sounds (e.g., footsteps, gears grinding, wheels rotating, etc.) improve understanding of the processes at work in the current situation;
③ Innovative machine learning approaches to enable correctable learning-by-example, learning-by-instruction, and learning-by-download of expertise (e.g., remember the rooftop scene in the movie The Matrix, where the character Trinity receives a download of the expertise to fly a Bell 212 helicopter when it is needed to rescue Morpheus from the Agents?); and
④ Re-thinking the design of System-on-Chip (SoC) memory management to better emulate humans' associative semantic memory, in which relationships among chunks of memory inform cache management (speeding and easing traversal of semantic networks).
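To make item ④ concrete, here is a minimal, purely illustrative sketch (not HVR's actual design; all names and parameters are hypothetical) of associative cache management in software: accessing one memory chunk spreads activation to semantically related chunks, so the "hot set" is chosen by association strength rather than by recency alone, as in an LRU cache.

```python
# Toy "associative cache": spreading activation over a semantic network
# decides which memory chunks stay resident. Illustrative sketch only.

from collections import defaultdict

class AssociativeCache:
    def __init__(self, capacity, decay=0.5):
        self.capacity = capacity        # max chunks kept "hot"
        self.decay = decay              # attenuation per association hop
        self.edges = defaultdict(dict)  # chunk -> {related chunk: weight}
        self.activation = defaultdict(float)

    def associate(self, a, b, weight=1.0):
        """Record a bidirectional semantic relationship between two chunks."""
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def access(self, chunk):
        """Accessing a chunk boosts it and spreads activation to its
        neighbors, effectively prefetching related chunks into the hot set."""
        self.activation[chunk] += 1.0
        for neighbor, w in self.edges[chunk].items():
            self.activation[neighbor] += self.decay * w

    def hot_set(self):
        """The chunks currently worth caching: the most activated ones."""
        ranked = sorted(self.activation, key=self.activation.get, reverse=True)
        return set(ranked[:self.capacity])

cache = AssociativeCache(capacity=3)
cache.associate("helicopter", "rotor")
cache.associate("helicopter", "fuel")
cache.associate("tractor", "plow")
cache.access("helicopter")   # also warms "rotor" and "fuel"
print(cache.hot_set())
```

Note the design point this toy makes: a purely recency-based cache would warm only "helicopter", while the associative scheme also warms "rotor" and "fuel", anticipating related retrievals the way human semantic memory appears to.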
Acknowledgements
Much of our technical and programmatic inspiration has come from the pioneering work of many individuals, most notably: Doug Engelbart, Vannevar Bush (cf. As We May Think), J.C.R. Licklider, and, yes, even sci-fi author (and Navy engineer) Robert Heinlein. Also, the extensive collaboration at Carnegie Mellon University of Allen Newell with Herb Simon, and the resultant papers, books, and presentations, taught each of us much about human cognition and about emulating the human cognitive architecture. We'd like to thank them all before proceeding past this point. Additionally, one of our founding partners, Mr. Jim Hicinbothom, wishes to thank his mentor, Dr. Wayne W. Zachary, for a superb quarter century of skills development and experience in the practice of cognitive systems engineering. And finally, we want to remember an outstanding cognitive systems and mechatronics engineer whom Mr. Hicinbothom had been mentoring for a decade, a young wife and mother, who was going to be invited to join us in founding this partnership, but who died suddenly and unexpectedly from complications following a severe stroke: Mrs. Susan M. (Eitelman) Dean. 'Sus' was a wonder, and her friends and colleagues miss her terribly! https://www.facebook.com/InMemoryOfSweetAndSunnySusie/?_fb_noscript=1
Core Technologies & Applications
Autonomous Cognition Engine for 'n' Senses (ACE#S)
ACE#S & The "M SPACE" Model: How It Works
Background
Artificially intelligent systems are, in our experience, most capable of approaching competent autonomy when they emulate the only known natural intelligent systems judged to be both sapient and sentient (able to think, reason, remember, and feel emotions, and which are self-aware): human beings (Homo sapiens). Cognition and the human mind are what we're trying to emulate here, not necessarily the biological machinery that comprises humans' brains and bodies. Artificial neural networks (e.g., DNNs, CNNs, and the many other variations on this theme) are important tools developed in the search for means to create a general artificial intelligence (GAI, more commonly just abbreviated as AI; the earlier symbolic approaches are sometimes labelled Good Old Fashioned AI, or GOFAI, but that's another long story).

However, while the neurological and brain sciences continue accelerating our understanding of how the embodied human brain works, humanity is still very far from truly understanding how the human mind springs from the biological systems of the human body and brain. We cannot yet get to cognition and mind by starting at the neuron, though many researchers are working hard on achieving that breakthrough. In the meantime, a lot more is known about the human mind, and especially about the human cognitive architecture, at a higher level of description (the cognitive level) than at lower levels of description, such as the level of tissues, cells (neurons, etc.), and the messages they send to each other (a.k.a. the physical level of description), which is where Artificial Neural Networks and most statistical learning models for AI (a.k.a. the 2nd Wave of AI, or #2wAI) operate. ACE#S is an attempt at the 3rd Wave of AI (#3wAI), Contextual Adaptation, with aspirations to set the stage for the wave that follows.
Kinds of Applications
ACE#S is intended and designed to enable creation of a family of adaptable, embeddable cognition engines in the form of customizable modules suitable for use in many kinds of applications, including:
...in Consumer Electronics with embedded intelligent tutoring/training and personal assistance/aiding for their humans (e.g., our Personal SYNTHetic Instructional Agent, PSYNTHIA).
...in Ground Vehicles & UGVs:
Robotic farmhand (chores, preparing soil, planting, nurturing, weeding, harvesting, monitoring growth, etc.).
Robotic ranch hand (chores, more chores including animal husbandry, monitoring: forage, feedstock, animal health/behaviour, predation, fire safety, etc.).
Transport of personnel and material over local roads and through fields.
Inside large buildings and across campuses (for transport, providing assistance/aiding, tutoring and training, etc.), for example:
Factory
Warehouse
Hospital
Educational institution (primary school, college, university)
Hotel/resort (hospitality).
Town (e.g., running errands in town, transporting between town and "home").
City (e.g., autonomous taxis, cross-town express large package delivery & courier service, numerous kinds of assisting individuals and teams, etc.).
Interstate transport (long-distance courier service, trucking, tugboats managing barges on inland waterways, etc.).
Ports
Seaports and inland waterway ports (e.g., movement of personnel and containerized goods)
Airports (e.g., movement of passengers, personnel, baggage and other materials)
Spaceports
Border patrol, complex perimeter patrol, and similar 24/7 security and safety monitoring tasks.
...in Ships & Other Watercraft:
Surface ships, smaller boats & USVs
Submarines & UUVs.
...in Aircraft & UAVs:
Fixed wing
Rotary wing
Lifting body.
...in Spacecraft:
Launch vehicles (to safely carry people, other vehicles & materials up a planet's gravity well into space)
Reentry vehicles (to safely carry people, other vehicles & materials down a planet's gravity well from space)
Satellites (to maintain safe orbit and enable monitoring: crops, pests, blights, droughts, severe weather and other economically important phenomena like wildfires, etc.)
Space stations
Transporters inside larger stations
Maintenance bots and semi-autonomous waldos (to help humans do remote maintenance tasks, etc.)
Lab assistant bots (to aid researchers with mundane chores and continual monitoring tasks in support of research and development programs)
Prospector vehicles (to seek useful resources throughout our solar system and beyond, hunting asteroids and resources on smaller moonlets)
Planetary rovers (to explore and seek useful resources on planetary bodies and their larger moons in our solar system, to safely transport materials or people between installations on, or under, the surface).