Approximate Computing

Results are right, right?


Approximate computing uses imprecise logic to compute approximately correct results.

It is more power efficient than precise logic. A simple example: a fixed-point multiplication can serve as an approximation of a floating-point multiplication.
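The fixed-point idea above can be sketched in a few lines. This is a minimal illustration, not a hardware implementation: it quantizes real numbers to Q8 fixed-point integers (8 fractional bits, a format chosen here for illustration), multiplies them with a cheap integer multiply and shift, and compares against the exact floating-point product.

```python
# Sketch: approximating a floating-point multiplication with
# Q8 fixed-point arithmetic (8 fractional bits, chosen arbitrarily).

FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # 256

def to_fixed(x: float) -> int:
    """Quantize a real number to a Q8 fixed-point integer."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q8 values; rescale the product back to Q8."""
    return (a * b) >> FRAC_BITS  # integer multiply + shift, no FPU

def to_float(x: int) -> float:
    """Convert a Q8 value back to a real number."""
    return x / SCALE

x, y = 3.14159, 2.71828
exact = x * y
approx = to_float(fixed_mul(to_fixed(x), to_fixed(y)))
print(f"exact={exact:.6f} approx={approx:.6f} error={abs(exact - approx):.6f}")
```

The result is close but not exact: quantization and the truncating shift introduce a small, bounded error, in exchange for replacing floating-point hardware with a plain integer multiplier.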



Approximate computing has been applied to image processing, machine learning, deep learning, and other error-tolerant domains.
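One common software-level approximation used in such error-tolerant workloads is loop perforation: skipping a fraction of loop iterations and estimating the result from the remainder. The sketch below is a hypothetical illustration (the function name and stride are my own), estimating a mean from every fourth element.

```python
# Sketch of loop perforation: trade accuracy for fewer operations
# by sampling every `stride`-th element instead of all of them.

def perforated_mean(values, stride=4):
    """Estimate the mean from a strided subsample of the input."""
    sample = values[::stride]
    return sum(sample) / len(sample)

data = list(range(1000))        # exact mean is 499.5
approx = perforated_mean(data)  # touches only 250 of 1000 elements
print(f"approx mean = {approx}")
```

For data that is reasonably uniform, the estimate stays close to the true mean while doing a quarter of the work, which is exactly the kind of accuracy-for-efficiency trade approximate computing exploits.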



My research on approximate computing includes, but is not limited to, algorithms, circuits, architectures, and applications.
