I am an associate professor in the Department of Electrical Engineering at the University of Concepción, where I am a member of the Digital Systems Group. I received my Ph.D. from the Department of Computer Science and Engineering at the University of Washington.
Teaching (in Spanish)
First Semester
- Computer Architecture (543.426)
- Digital VLSI Design (543.758)
Second Semester
- Advanced Digital Systems Design (543.759)
- Introduction to Artificial Neural Networks (543.719)
Current Undergraduate Students
- Pablo Angulo
- Nataniel Fuentes
- Ricardo Galleguillos
Current Graduate Students
- Gonzalo Carvajal
- Waldo Valenzuela
Research
Analog and Mixed-Signal Adaptive Systems in VLSI
My main research interests lie in the implementation of large-scale adaptive signal-processing and neural-network algorithms in analog and mixed-signal VLSI. Analog VLSI can deliver good performance with far less die area and power dissipation than comparable digital implementations. However, analog VLSI systems exhibit poor arithmetic precision due to problems such as device mismatch, circuit nonlinearity, and charge leakage. We use silicon devices called synapse transistors to accurately store and update a nonvolatile analog value on a chip, represented as electrical charge on the floating gate of a pFET. This analog value can be used directly as a high-resolution adaptive coefficient in a filter or neural network, or to calibrate analog circuits on chip and compensate for the effects of device mismatch. Synapse transistors are also immune to the charge-leakage and charge-injection problems common in VLSI capacitors. Thus, we can use them to implement analog and mixed-signal adaptive systems in VLSI at very low cost, with performance comparable to much larger, more power-hungry digital systems of moderate resolution (e.g., 10 bits).
A secondary line of research, currently under development, is the design and implementation of reconfigurable VLSI systems for compute-intensive signal-processing tasks. This includes mapping algorithms onto current commercial FPGAs, designing new architectures for efficient reconfigurable computing, and integrating the analog-VLSI adaptive systems described above with reconfigurable digital logic.
- Gonzalo Carvajal, Miguel Figueroa and Seth Bridges, “Effects of Analog-VLSI Hardware on the Performance of the LMS Algorithm,” in Proceedings of the 2006 International Conference on Artificial Neural Networks (ICANN), Springer-Verlag Lecture Notes in Computer Science, No. 4131, pp. 963-973, Athens, Greece, September 10-14, 2006. [pdf]
Device mismatch, charge leakage and
nonlinear transfer functions limit the resolution of analog-VLSI arithmetic
circuits and degrade the performance of neural networks and adaptive filters
built with this technology. We present an analysis of the impact of these
issues on the convergence time and residual error of a linear perceptron using
the Least-Mean-Square (LMS) algorithm. We also identify design tradeoffs and
derive guidelines to optimize system performance while minimizing circuit die
area and power dissipation.
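As a point of reference for the analysis above, the ideal (floating-point) LMS algorithm for a linear perceptron can be sketched in a few lines; the target weights and learning rate below are hypothetical example values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target weights for a 4-tap linear perceptron.
w_true = np.array([0.5, -0.3, 0.8, 0.1])
w = np.zeros(4)   # adaptive weights, initialized to zero
mu = 0.05         # LMS learning rate

for _ in range(2000):
    x = rng.standard_normal(4)   # input vector
    d = w_true @ x               # desired (reference) output
    y = w @ x                    # perceptron output
    e = d - y                    # error signal
    w += mu * e * x              # LMS update: w <- w + mu*e*x

print(np.round(w, 3))
```

In this idealized form the weights converge to the target; the paper's contribution is analyzing how mismatch, leakage, and nonlinearity in the analog implementation of these arithmetic steps change the convergence time and residual error.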
- Miguel Figueroa, Esteban Matamala, Gonzalo Carvajal and Seth Bridges, “Adaptive Signal-Processing in Mixed-Signal VLSI with Anti-Hebbian Learning,” in Proceedings of the 2006 IEEE Computer Society Annual Symposium on VLSI, pp. 133-138, Karlsruhe, Germany, March 2-3, 2006. [pdf]
We describe analog and mixed-signal
primitives for implementing adaptive signal-processing algorithms in VLSI based
on anti-Hebbian learning. Both on-chip calibration techniques and the adaptive
nature of the algorithms allow us to compensate for the effects of device
mismatch. We use our primitives to implement a linear filter trained with the
Least-Mean Squares (LMS) algorithm and an adaptive decorrelation network that
improves the convergence of LMS. When applied to an adaptive Code-Division
Multiple-Access (CDMA) despreading application, our system, without the need
for power control, achieves more than a 100x improvement in the bit-error ratio
in the presence of high interference between users. Our 64-tap linear filter uses 0.25mm2
of die area and dissipates 200μW in a 0.35μm CMOS process.
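The anti-Hebbian decorrelation idea can be illustrated with a minimal two-input software sketch; the mixing coefficients and adaptation rate are hypothetical, and this is only the learning rule, not the VLSI circuit:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.002   # adaptation rate (hypothetical value)
c = 0.0       # decorrelation coefficient, initialized to zero

for _ in range(20000):
    s = rng.standard_normal(2)
    x1 = s[0]                     # reference input
    x2 = 0.8 * s[0] + 0.6 * s[1]  # correlated input (hypothetical mixture)
    y = x2 - c * x1               # decorrelated output
    c += eta * y * x1             # anti-Hebbian update drives E[y*x1] -> 0

# c settles near the mixing coefficient 0.8, removing the shared component.
print(round(c, 2))
```

Removing such correlations between inputs whitens the signal seen by a downstream LMS filter, which is why the decorrelation network improves LMS convergence.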
- Miguel Figueroa, Seth Bridges and Chris Diorio, “A 19.2 GOPS Mixed-Signal Filter with Floating-Gate Adaptation,” IEEE Journal of Solid-State Circuits, Vol. 39, No. 7, pp. 1196-1201, July 2004. [pdf]
We have built a 48-tap, 200MHz,
mixed-mode adaptive FIR filter with 8-bit input and 10-bit output resolution.
The filter stores its tap weights in nonvolatile analog memory cells with
linear updates, and adapts using the least-mean-square (LMS) algorithm. We run
the input through a digital tapped delay line, multiply the digital words with
the analog tap weights using mixed-mode multipliers, and use pulse-based
adaptation to set the tap coefficients. The LMS signal-path resolution exceeds
13 bits. The total die area is 2.6mm2 in a 0.35μm CMOS process. The filter consumes 20mW with a
6mA differential output current. We can readily scale the design to higher resolutions
and longer delay lines.
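Pulse-based adaptation with fixed-size weight increments can be approximated in software by a sign-sign variant of LMS; this is a behavioral sketch with hypothetical values, not a model of the chip's circuits:

```python
import numpy as np

rng = np.random.default_rng(2)

w_true = np.array([0.4, -0.2, 0.6])  # hypothetical target coefficients
w = np.zeros(3)                      # adaptive tap weights
delta = 0.002                        # fixed per-pulse weight increment

for _ in range(20000):
    x = rng.standard_normal(3)
    e = (w_true - w) @ x                  # error of the adaptive filter
    # Each update applies one fixed-size pulse per weight, in the
    # direction given by the signs of the error and the input.
    w += delta * np.sign(e) * np.sign(x)

print(np.round(w, 2))
```

The weights converge to a band around the target whose width scales with the pulse size, which is the basic trade-off between adaptation speed and residual error in pulse-based schemes.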
- John Hyde, Todd Humes, Chris Diorio, Mike Thomas and Miguel Figueroa, “A 300 MS/s, 14-bit, Digital-to-Analog Converter in Logic CMOS,” IEEE Journal of Solid-State Circuits, Vol. 38, No. 5, pp. 734-740, May 2003. [pdf]
We describe a floating-gate trimmed
14-bit 300-MS/s current-steered digital-to-analog converter (DAC) fabricated in
0.25-μm and 0.18-μm CMOS logic processes. We trim the static integral nonlinearity
to 0.3 least significant bits using analog charge stored on floating-gate
pFETs. The DAC occupies 0.44 mm2 of die area, consumes 53 mW at 250
MHz, allows on-chip electrical trimming, and achieves better than 72-dB
spur-free dynamic range at 250 MS/s.
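Why trimming reduces static nonlinearity can be seen in a toy model of a unit-element current-steering DAC; the element count, mismatch level, and trimming model below are hypothetical simplifications, not the chip's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy DAC: 2**6 nominally identical unit current sources with a
# hypothetical 0.5% random mismatch. Floating-gate trimming is modeled
# simply as subtracting each source's measured error.
n_bits = 6
units = 1.0 + 0.005 * rng.standard_normal(2 ** n_bits)
trim = units - 1.0        # per-source errors stored as trim values
trimmed = units - trim    # trimmed sources, nominally 1.0 each

def inl(u):
    """Integral nonlinearity in LSBs: worst-case deviation of the
    transfer curve from the straight line through its endpoints."""
    ramp = np.concatenate(([0.0], np.cumsum(u)))
    ideal = np.linspace(0.0, ramp[-1], ramp.size)
    return np.max(np.abs(ramp - ideal)) / (ramp[-1] / (ramp.size - 1))

print(inl(units), inl(trimmed))  # trimming removes the static INL
```

In the real converter the trim values are stored as analog charge on floating-gate pFETs rather than subtracted digitally, but the effect on the static transfer curve is the same.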
- Chris Diorio, David Hsu and Miguel Figueroa, “Adaptive CMOS: From Biological Inspiration to Systems on a Chip,” Proceedings of the IEEE, Vol. 90, No. 3, pp. 345-357, March 2002. [pdf]
Local long-term adaptation is a
well-known feature of the synaptic junctions in nerve tissue. Neuroscientists
have demonstrated that biology uses local adaptation both to tune the
performance of neural circuits and for long-term learning. Many researchers
believe it is a key to the intelligent behavior and the efficiency of
biological organisms. Although engineers use adaptation in feedback circuits
and in software neural networks, they do not use local adaptation in integrated
circuits to the same extent that biology does in nerve tissue. A primary reason
is that locally adaptive circuits have proved difficult to implement in
silicon. We describe complementary metal-oxide-semiconductor (CMOS) devices
called synapse transistors that facilitate local long-term adaptation in silicon.
We show that synapse transistors enable self-tuning analog circuits in digital
CMOS, facilitating mixed-signal systems-on-a-chip. We also show that synapse
transistors enable silicon circuits that learn autonomously, promising
sophisticated learning algorithms in CMOS.