Birds, fish, insects and other creatures have long been known to swarm, or “murmurate”: on occasion they move and coordinate as a group, rather than being directed by a centralized leader.
This age-old natural strategy of protection by confusion has given rise to the emerging concept of “swarm learning,” which some are touting as the next era in AI innovation.
When applied to intelligent devices operating in the real world, “swarm learning” refers to decentralization. This means that users can share learnings at the edge, or at distributed sites, without moving or exposing data.
The benefits? Improved accuracy, predictability, flexibility, security; less bias, quicker answers, greater opportunities for shared and global learning.
“Swarm learning is an important movement in the AI market, with broad support across the public and private sectors, to combine the power of expanding data sets with the innovation and insights from organizations across the globe,” said Justin Hotard, executive vice president and general manager of HPC and AI at Hewlett Packard Enterprise (HPE).
HPE looks to jump the swarm
Staking an early claim in “the next gold rush for machine intelligence,” HPE today announced the launch of HPE Swarm Learning. This privacy-preserving, decentralized machine learning (ML) framework for the edge and distributed sites was developed by its R&D organization Hewlett Packard Labs.
The tool is designed to provide users with containers that are easily integrated with AI models via the HPE swarm API. AI model learnings can then immediately be shared both inside an organization and with industry peers on a global scale. As Hotard explained, this can improve training and foster collaboration without the actual sharing of data or the unnecessary movement of data.
HPE Swarm Learning enables training at the edge with the control, privacy and security afforded by blockchain technology, Hotard said. A permissioned blockchain removes the need for a central custodian in the peer-to-peer network: members can be onboarded, leaders can be elected and model parameters can be merged to provide resilience and security.
By only sharing learnings, the tool allows users to leverage large training datasets without compromising privacy. This also helps remove bias and increase model accuracy – ultimately accelerating insights at the edge, Hotard said.
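The mechanics of “sharing learnings, not data” can be sketched in a few lines. The article does not describe HPE’s swarm API, so the function and parameter names below are hypothetical; the example shows only the general idea behind this class of techniques: each site trains on its own data, and only the resulting model parameters are merged, here as a weighted average by each site’s sample count (federated-averaging style).

```python
# Illustrative sketch only: HPE's actual merge strategy and API are not
# public in this article. This shows the generic idea of merging locally
# trained model parameters without ever moving the underlying data.
from typing import Dict, List


def merge_parameters(
    local_models: List[Dict[str, float]],
    sample_counts: List[int],
) -> Dict[str, float]:
    """Merge per-site model parameters into one shared model.

    Each site contributes only its trained weights and the number of
    samples it trained on; raw data never leaves the site.
    """
    total = sum(sample_counts)
    merged: Dict[str, float] = {}
    for params, n in zip(local_models, sample_counts):
        for name, value in params.items():
            # Weight each site's parameters by its share of the data,
            # so larger datasets influence the shared model more.
            merged[name] = merged.get(name, 0.0) + value * (n / total)
    return merged


# Three hypothetical sites share weights after a local training round.
site_models = [
    {"w": 0.80, "b": 0.10},  # site A, 1,000 samples
    {"w": 0.60, "b": 0.30},  # site B, 3,000 samples
    {"w": 0.70, "b": 0.20},  # site C, 1,000 samples
]
shared = merge_parameters(site_models, [1000, 3000, 1000])
```

In a swarm setting this merge would be coordinated by an elected leader over the permissioned blockchain rather than by a fixed central server, but the privacy property is the same: only parameters, never training data, cross site boundaries.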
Traditionally, AI model training has taken place at a central location. Data is shuttled back and forth from the edge to a central data center for analysis. Results are then delivered back to the edge.
But this back-and-forth system can be slow, expensive and insecure, Hotard said. Repeatedly moving large volumes of data from the same source is inefficient, and the approach scales poorly as edge devices grow more intelligent. Data privacy and ownership rules and regulations also limit data sharing and movement.
All of this can ultimately lead to biased, even inaccurate, models.
Instead, by training models and harnessing insights at the edge, businesses can make decisions faster, at the point of impact, resulting in better outcomes. Dataset sizes can be increased, ML models can learn more equitably and data governance and privacy are preserved.
Because intelligent devices process data at the source – credit card transactions in real-time, for example – that data is only truly valuable when it’s shared and turned into a collective understanding, Hotard said. On a grander scale, sharing learnings between organizations at the very source of data has broader implications for business and society.
For example, by sharing fraud-related learnings with numerous financial institutions at once, banking and financial services institutions can fight credit card fraud, which is expected to cost more than $400 billion globally over the next decade.
Hotard pointed to a use case with TigerGraph, a provider of graph database and graph analytics software. The company combined HPE Swarm Learning with its data analytics platform to augment efforts in detecting unusual activity in credit card transactions. This helped increase accuracy when training ML models from vast quantities of data from numerous banks across multiple locations, Hotard said.
Manufacturers have also applied swarm learning to predictive maintenance by collecting learnings from sensor data across multiple manufacturing sites. In healthcare, meanwhile, the technique has been applied to accelerating colon cancer diagnosis. It did so by overcoming challenges in data privacy, ownership and operational efficiency; medical images are large, and duplicating them is often simply out of the question. Having more data, and more diverse data, has resulted in greater accuracy in disease classification, Hotard said.
Swarming for the greater AI good
All told, the hope is that the swarm learning concept will foster “AI for the greater good” by encouraging collaboration across organizations and around the globe, Hotard said. He added that it’s HPE’s mission to make AI more heterogeneous by removing the complexities of ML development and enabling ML engineers to build models at greater scale.