Author: Ben Brubaker
Compiled by Synced Review
We've all had similar experiences: a program is running when the computer suddenly freezes, and at best we recover our files, at worst we recreate them from scratch; or our phone keeps warning that its memory is full, forcing us to painfully delete precious photos or apps.
These everyday frustrations all point to two crucial fundamental elements in the world of computing: time and space.
Time and space (also known as memory) are the two most fundamental resources in computing: any algorithm requires a certain amount of time to execute and occupies a certain amount of space to store data during its operation.
Previously, algorithms for certain tasks were believed to require space roughly proportional to their running time, and researchers widely assumed that this could not be improved upon.
The latest research by MIT theoretical computer scientist Ryan Williams establishes a mathematical procedure that can transform any algorithm, regardless of the task it performs, into a form that uses far less space than its running time, showing that in theory a small amount of memory outweighs a large amount of computation time. The result overturns nearly 50 years of received wisdom among computer scientists.
Paper Title: Simulating Time With Square-Root Space
Paper Address: https://arxiv.org/pdf/2502.17779
More importantly, this result not only reveals what can be computed under a given space constraint, but also indirectly establishes which computations cannot be completed within a given amount of time. The latter had long been anticipated, but a rigorous method of proof was lacking.
50 Years of Exploration and Bottlenecks
Juris Hartmanis
In 1965, Juris Hartmanis and Richard Stearns collaborated on two groundbreaking papers, which for the first time established rigorous mathematical definitions for the concepts of "Time" and "Space."
Paper Address: https://doi.org/10.1090/S0002-9947-1965-0170805-7
These definitions provided researchers with a common language, enabling them to compare these two types of resources and, based on them, categorize problems into different complexity classes.
One of the most important complexity classes is P, which, roughly speaking, includes all problems that can be solved within a reasonable amount of time. A corresponding space complexity class is called PSPACE.
The relationship between these two classes is one of the core problems in complexity theory.
All problems in P also belong to PSPACE, because a fast algorithm simply does not have enough time to fill up a large amount of memory during its run. Conversely, if every PSPACE problem could also be solved by a fast algorithm, the two classes would coincide, and computational time and computational space would be essentially equivalent in power.
However, complexity theorists generally believe that the PSPACE class is much larger and contains many problems that do not belong to the P class. In other words, they believe that, from a computational capability perspective, space is a much more powerful resource than time. This belief stems from the fact that algorithms can repeatedly use the same small block of memory, whereas time cannot be reused—once past, it cannot be recovered.
However, complexity theorists are not satisfied with this intuitive reasoning: they require rigorous mathematical proof. To prove that the PSPACE class is indeed strictly greater than the P class, researchers must be able to demonstrate the existence of certain problems within PSPACE that are inherently impossible to solve with fast algorithms.
In 1975, John Hopcroft, Wolfgang Paul, and Leslie Valiant designed a universal "simulation program" that proved that any task completed within a specific amount of time could be accomplished using slightly less space than that time. This was the first significant step in connecting time and space, indicating that space is at least slightly stronger than time.
However, subsequent research stalled, and complexity theorists began to suspect that they might have encountered a fundamental obstacle.
The problem lay in the "universality" feature of the simulation method proposed by Hopcroft, Paul, and Valiant. While many problems could indeed be solved in far less space than their time budget, some problems, intuitively, seemed to require almost as much space as time. If this were truly the case, then a more efficient space-saving universal simulation would be out of the question.
Soon after, Paul, along with two other researchers, proved this point: a more efficient universal simulation is indeed impossible, provided that a seemingly self-evident premise is adopted—that different data blocks cannot simultaneously occupy the same memory space at any given moment.
Paul's research results indicated that to truly solve the relationship between P and PSPACE (P versus PSPACE problem), it would be necessary to completely abandon the simulation-centric research path and instead seek an entirely new theoretical method. The problem was that no one at the time could propose a viable alternative.
This research conundrum thus entered a deadlock that lasted for fifty years—until Williams's work finally broke this stalemate.
Breaking the Stalemate
Williams's new research grew out of a breakthrough on a different question about computational memory: which problems can be solved with extremely limited space?
In 2010, complexity theory pioneer Stephen Cook and his collaborators designed a new task called the tree evaluation problem: the leaves of a tree are labeled with data, each internal node combines the values of its children, and the goal is to compute the value at the root. They proved that no algorithm with a space budget below a certain threshold could solve this problem.
However, there was a flaw in this proof. Its reasoning relied on an intuitive assumption made by Paul and others decades earlier: algorithms cannot store new data in already occupied memory space.
For over a decade afterward, complexity theorists tried to close this gap. Then, in 2023, Stephen Cook's son James Cook and the researcher Ian Mertz overturned the assumption itself: they designed a new algorithm that solves the tree evaluation problem with far less space than was previously thought possible, rendering the original lower-bound argument invalid.
Cook (left) and Mertz (right)
In Stephen Cook's original proof assumption, information bits were treated as "pebbles" that had to be stored in different locations in the algorithm's memory. However, it turned out that data storage is far more flexible than that.
Williams's Revolutionary Leap
The algorithm proposed by Cook and Mertz attracted considerable interest from researchers, but it was initially unclear whether it applied to scenarios other than the tree evaluation problem.
Ryan Williams
In the spring of 2024, a group of students in a class taught by Ryan Williams presented Cook and Mertz's paper as their final project. Their enthusiasm sparked his interest, leading him to delve deeper into the work.
Once he started, he quickly grasped a key idea: the method proposed by Cook and Mertz was essentially a universal space-compression tool. Why not use it to build a new universal simulation linking time and space, one like the simulation constructed by Hopcroft, Paul, and Valiant, but with much stronger space savings?
That classic result provided a way to transform any algorithm with a given time budget into a new algorithm with a slightly smaller space budget. Williams realized that if simulation techniques were built upon "squishy pebbles," the space required by the transformed new algorithm would be reduced much more significantly—roughly equal to the square root of the initial time budget.
This new type of space-saving algorithm would operate significantly slower, making it unlikely to have practical applications. However, from a theoretical perspective, its significance is nothing short of a revolutionary breakthrough.
Williams's simulation method extends an existing concept—"block-respecting Turing machine simulation." The basic idea is to break down the entire computation process (assuming a total of t computational steps) into t/b consecutive "computation blocks," with each block containing b computation steps.
The input/output states (or "configurations") of these "computation blocks" have dependencies, which can form a "computation graph."
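To make the block decomposition concrete, here is a small Python sketch of our own (not code from the paper): it models a run of the machine by the tape cell visited at each step, cuts the run into blocks of b steps, and records which earlier blocks each block reads data from. The resulting edges are the kind of dependency structure described above.

```python
# Toy illustration of the "computation block" decomposition (not Williams's
# actual construction, which works on multitape Turing machines). We model a
# run of t steps by the tape cell the head visits at each step; every step
# reads and then overwrites its cell. Cutting the run into blocks of b steps,
# block j depends on block k < j whenever some cell read inside block j was
# last written inside block k. These edges form the "computation graph."

def computation_graph(head_positions, b):
    t = len(head_positions)             # total number of steps
    num_blocks = (t + b - 1) // b       # roughly t / b blocks
    last_writer = {}                    # tape cell -> block that last wrote it
    deps = {j: set() for j in range(num_blocks)}
    for step, cell in enumerate(head_positions):
        block = step // b
        prev = last_writer.get(cell)
        if prev is not None and prev != block:
            deps[block].add(prev)       # block reads a value produced earlier
        last_writer[cell] = block       # this block now holds the cell's latest value
    return deps

# Example: a head that sweeps right across 12 cells and then back, with b = 4.
if __name__ == "__main__":
    head = list(range(12)) + list(range(10, -1, -1))
    for block, parents in sorted(computation_graph(head, b=4).items()):
        print(f"block {block} depends on blocks {sorted(parents)}")
```

Running it prints, for each block, the earlier blocks it depends on; in the real construction each block depends on only a small constant number of predecessors, which is where the constant fan-in d below comes from.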
Williams's crucial step is to reduce the problem of determining the outcome of the Turing machine's t-step computation (in particular, its final state or output) to an instance of the Tree Evaluation Problem (TEP).
This constructed instance of the tree evaluation problem has specific parameters: the height h of the tree is approximately t/b (i.e., the number of computation blocks), the bit length of information passed by each node is b, and the fan-in (how many child nodes each node has) is d (a small constant dependent on the Turing machine itself).
Importantly, this "tree" is "implicitly defined," meaning there's no need to actually construct the entire tree in memory; instead, there's a set of rules that can determine what any part of the tree should look like at any time.
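The following Python sketch shows what an implicitly defined tree evaluation instance looks like in miniature. The interface and the toy parameters (children, node_function, leaf_value, D, H, B) are our own illustration, not the paper's construction, in which the node functions would instead encode b steps of the Turing machine.

```python
# A minimal sketch of an implicitly defined Tree Evaluation instance.
# Nothing is stored in memory as a tree; two rules answer all queries on
# demand. Toy parameters: fan-in D, height H, B-bit values.

D, H, B = 2, 3, 8                       # fan-in, height, bits per value (toy sizes)

def children(node):
    """Rule 1: the D children of an internal node, by index (heap-style numbering)."""
    return [node * D + i + 1 for i in range(D)]

def leaf_value(node):
    """Toy values at the leaves; a real reduction would derive these from the input."""
    return node % (1 << B)

def node_function(child_values):
    """Rule 2: how a node's B-bit value is computed from its children's values.
    In Williams's reduction this step would simulate b machine steps; here it
    is just a toy combination."""
    return sum(child_values) % (1 << B)

def evaluate(node=0, depth=0):
    """Naive recursive evaluation, shown only to define the problem. The point
    of the Cook-Mertz algorithm is to compute the same root value in far less
    space than this recursion uses."""
    if depth == H:
        return leaf_value(node)
    return node_function([evaluate(c, depth + 1) for c in children(node)])

print(evaluate())                       # the value at the root
```

The naive recursion here uses space proportional to the height of the tree times the value size; the Cook-Mertz procedure computes the same root value while reusing the same memory across levels.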
For this constructed tree evaluation instance, Williams applied the algorithm of Cook and Mertz. For trees of fan-in d, height h, and b-bit values, the Cook-Mertz algorithm runs in space roughly O(d·b + h·log(d·b)).
Williams then analyzed the total space complexity and optimized it by carefully selecting the size of the "computation blocks," b. When the parameter b is set to approximately √t (the square root of the total computation time t), the aforementioned tree height h (approximately t/b) becomes approximately √t.
Substituting these choices into the Cook-Mertz space bound, and accounting for additional logarithmic overhead (a log t factor that comes from storing pointers, counters, and the like), the total space of the simulation comes out to O(√t · log t).
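For readers who want the arithmetic, here is the parameter balancing written out, assuming the Cook-Mertz bound in the form O(d·b + h·log(db)) with constant fan-in d (our reading of the bound; constants and low-order terms omitted):

```latex
% Space used by the simulation as a function of the block size b,
% with h \approx t/b and d = O(1):
\[
  S(b) \;\approx\; b \;+\; \frac{t}{b}\,\log b .
\]
% Choosing b \approx \sqrt{t} gives
\[
  S\!\bigl(\sqrt{t}\bigr) \;\approx\; \sqrt{t} \;+\; \sqrt{t}\cdot\tfrac{1}{2}\log t
  \;=\; O\!\bigl(\sqrt{t}\,\log t\bigr),
\]
% and balancing the two terms exactly (b \approx \sqrt{t\log t}) yields the
% slightly sharper bound stated in the paper:
\[
  \mathrm{TIME}[t] \;\subseteq\; \mathrm{SPACE}\!\bigl[\sqrt{t\log t}\bigr].
\]
```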
References:
https://www.quantamagazine.org/for-algorithms-a-little-memory-outweighs-a-lot-of-time-20250521/
https://arxiv.org/pdf/2502.17779