Despite exponential improvements in computers, modern science and innovation require capabilities on a scale we never imagined. It’s time for some big ideas to shine.

Computing finds pervasive use in the modern world, from basic communications to designing the most advanced aerospace systems. If data is the new oil, then powerful, affordable, and ubiquitous computing systems are the engines that turn data’s raw potential into something useful.

Even though the capabilities of computers have advanced exponentially in the last few decades, the problems they are being pressed to solve have grown even faster. There are now billions of devices generating data that must constantly be analyzed, and scientific problems that demand hundreds of thousands of CPUs at once. To solve these problems at a reasonable cost, everyone from manufacturing engineers to astrophysicists is drawing on the field of Advanced Computing.

What is it?

Advanced Computing is an umbrella category that refers to the hardware, capabilities, and processes used in highly complex, demanding IT systems. Among its best-known topics are high-performance and high-throughput computing (HPC and HTC), the systems at the apex of the computing world.

Both HPC and HTC allow users to apply hundreds, thousands, or even millions of cores to a specific problem. The CPUs and GPUs in these systems are extremely powerful, able to quickly solve problems that would take ordinary computers many years. There are also several other sub-fields under Advanced Computing that may complement HPC and HTC:

  • Distributed Computing: A model in which components such as compute, networking, and storage are spread across many physically separate machines (a minimal sketch of this pattern follows this list).
  • Edge Computing: A distributed computing architecture in which data is analyzed as close to its source as possible, avoiding unnecessary cost and latency.
  • Hybrid Computing: A model that combines an enterprise’s own on-premises systems with cloud-based resources in a single architecture.
  • Composability: A paradigm that takes virtualization a step further by intelligently matching workloads with the right compute resources, without requiring users to manage the underlying hardware.
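
To make the distributed computing model above concrete, here is a minimal Python sketch of the scatter/gather pattern it relies on. This is a toy, assumed example: the workers are local processes standing in for separate machines, and the work function and chunk size are illustrative only.

    # Minimal sketch of the scatter/gather pattern behind distributed computing.
    # The "nodes" here are local worker processes; a real system would dispatch
    # each chunk to a separate machine over the network.
    from concurrent.futures import ProcessPoolExecutor

    def analyze_chunk(chunk):
        """Stand-in for a compute-heavy task (a simulation step, an image batch, etc.)."""
        return sum(x * x for x in chunk)

    def main():
        data = list(range(1_000_000))
        chunk_size = 100_000
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        # Scatter the chunks across the workers, then gather the partial results.
        with ProcessPoolExecutor(max_workers=4) as pool:
            partial_results = list(pool.map(analyze_chunk, chunks))

        print("combined result:", sum(partial_results))

    if __name__ == "__main__":
        main()

The same split-scatter-gather shape underlies edge and hybrid deployments; what changes is where each chunk is executed.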

What are the challenges?

In addition to a whole new class of problems that are pushing the limits of today’s most powerful HPC and HTC systems, there are other major challenges facing the field of Advanced Computing:

  • Sustainability: Today’s data centers require as much power as a small city. As demand for computational power continues to grow, servers must become more energy-efficient and be powered by renewable energy.
  • Specialization & Interoperability: HPC and HTC systems are complex and difficult to manage. The new generation of tools must cut through this complexity and give non-specialists a functional, reliable, and secure experience.
  • Distributed Data Generation: Data is being generated in massive volumes, and not all of it can be analyzed in depth. On top of this, data-privacy regulations will require future applications to account for where their information lives.
  • Heterogeneous Requirements: Organizations may run hundreds of applications at once, each with its own footprint and infrastructure demands. Computing systems must adapt in real time to changing demands without sacrificing performance.
  • A Cyberphysical System-of-Systems: Practically every system we interact with today is controlled by a computer. As machines begin to coordinate by sharing data at scale, future computing engines must secure these cyberphysical systems from attack.

[Image: The NSF-funded Frontera supercomputer at the Texas Advanced Computing Center, launched on September 3, 2019 as the fastest supercomputer at any university and the fifth most powerful system in the world.]

What opportunities are available in South Carolina?

Leading companies such as Ericsson and Hewlett-Packard support the view that science and industry will move to a more composable, pervasive, and interconnected ‘fabric’ of compute. The software behind this fabric will allow any application to leverage any infrastructure, from the cloud to the edge, in a completely seamless way.

Kings Distributed Systems is at the forefront of building this networked compute fabric. Founded in 2017, the company is developing a completely network- and hardware-agnostic engine called the Distributed Compute Protocol (DCP). Unlike today’s disconnected systems, DCP creates a uniform platform for sharing and consuming computational resources from any computer hardware. It allows a vast range of applications to run in the centralized cloud, at the distributed edge, and everywhere in between.
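
As a purely illustrative sketch of the kind of workflow such a compute fabric enables, consider the following Python snippet. The names are invented for this article and are not the actual DCP API: the point is the shape of the interaction, in which an application describes its work, hands it to the fabric, and collects results without knowing whether they were computed in a cloud data center or on edge devices.

    # Hypothetical sketch only: these names are invented for illustration and
    # are not the real Distributed Compute Protocol (DCP) API.

    def render_frame(frame_number):
        """Stand-in work function; each input item is one independent unit of work."""
        return f"frame {frame_number} rendered"

    class ComputeFabric:
        """Toy stand-in for a shared pool of compute resources (cloud, edge, or both)."""

        def run(self, work_fn, inputs):
            # A real fabric would schedule each item onto whichever node is free
            # and stream results back as they complete; here we just run locally.
            return [work_fn(item) for item in inputs]

    fabric = ComputeFabric()
    results = fabric.run(render_frame, range(8))
    print(results)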

DCP throws the door open to a truly connected global computing fabric. From Web 3.0 to smart manufacturing and beyond, it provides the base layer for all kinds of hitherto impossible applications.

Besides Kings Distributed Systems, a significant number of companies are pursuing innovations to drive Advanced Computing forward. Companies like SambaNova and Graphcore are developing more efficient hardware for specialized applications like AI, while others like Snowflake are building the next generation of database software. Still others, like Liqid, are working to make computational resources more flexible, while the industry-wide push into edge computing continues to pick up steam.

It is clear that the push to digital brought on by COVID-19, the ever-growing adoption of tools like AI and data science, and an ever-greater flow of VC funding will not slow down any time soon. Whatever else may happen in the next few years, the future of Advanced Computing is bright.

Who is in the industry in South Carolina?

Does your company work with advanced computing in South Carolina? Please click here to reach out to SCETA and be highlighted.
