2023 Interposers: TSMC Hints at 3400mm2 + 12x HBM in One Package
by Andrei Frumusanu on August 25, 2020 4:00 PM EST
Posted in: TSMC Tech Day 2020
High-performance computing chip designs have been pushing ultra-high-end packaging technologies to their limits in recent years. One solution to the industry's extreme bandwidth requirements has been a shift toward large designs integrated onto silicon interposers, directly connected to high-bandwidth memory (HBM) stacks.
TSMC has been evolving its CoWoS-S packaging technology over the years, enabling designers to create ever bigger and beefier designs with larger logic dies and more and more HBM stacks. One limitation for such complex designs has been the reticle limit of lithography tools.
Recently, TSMC has been raising its interposer size limit, going from 1.5x to 2x and on to a projected 3x reticle size with up to 8 HBM stacks for 2021 products.
As part of TSMC’s 2020 Technology Symposium, the company has now teased further evolution of the technology, projecting 4x reticle size interposers in 2023, housing a total of up to 12 HBM stacks.
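The ~3400mm2 figure in the headline follows directly from the 4x reticle multiple: a standard lithography reticle field is 26mm x 33mm, or 858mm2. A minimal sketch of the arithmetic:

```python
# Standard lithography reticle field: 26 mm x 33 mm
reticle_mm2 = 26 * 33  # 858 mm^2 per reticle

# Interposer area at each reticle multiple TSMC has discussed
for multiple in (1.5, 2, 3, 4):
    print(f"{multiple}x reticle -> {multiple * reticle_mm2:.0f} mm^2")
# The 4x case works out to 3432 mm^2, consistent with the ~3400mm2 headline figure.
```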
Although by 2023 we’re sure to have much faster HBM memory, a 12-stack implementation with today's fastest HBM2E, such as Samsung's 3200MT/s Flashbolt or SK Hynix's newest 3600MT/s modules, would represent at least 4.92TB/s to 5.5TB/s of memory bandwidth, many times more than even the most complex designs today.
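The bandwidth figures above follow from the 1024-bit interface of each HBM stack: bytes per second equal transfers per second times bits per transfer divided by eight. A quick check of the arithmetic:

```python
def hbm_bandwidth_tbs(stacks, transfer_rate_mts, bus_width_bits=1024):
    """Aggregate HBM bandwidth in TB/s.

    Each HBM2E stack exposes a 1024-bit interface, so:
    bytes/s = (transfers/s) * (bits per transfer) / 8
    """
    bytes_per_sec = transfer_rate_mts * 1e6 * bus_width_bits / 8
    return stacks * bytes_per_sec / 1e12

print(hbm_bandwidth_tbs(12, 3200))  # Samsung Flashbolt: ~4.92 TB/s
print(hbm_bandwidth_tbs(12, 3600))  # SK Hynix 3600MT/s: ~5.53 TB/s
```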
Carousel image credit: NEC SX-Aurora TSUBASA with 6 HBM2 Stacks
- TSMC Details 3nm Process Technology: Full Node Scaling for 2H22 Volume Production
- TSMC To Build 5nm Fab In Arizona, Set To Come Online In 2024
- TSMC & Broadcom Develop 1,700 mm2 CoWoS Interposer: 2X Larger Than Reticles
- TSMC Boosts CapEx by $1 Billion, Expects N5 Node to Be Major Success
- Early TSMC 5nm Test Chip Yields 80%, HVM Coming in H1 2020
- TSMC: 5nm on Track for Q2 2020 HVM, Will Ramp Faster Than 7nm
- TSMC: N7+ EUV Process Technology in High Volume, 6nm (N6) Coming Soon
jeremyshaw - Tuesday, August 25, 2020 - How will this scale vs wafer sized SRAM?
azfacea - Tuesday, August 25, 2020 - its not just intel TSMC is destroying, its us. humans are finished. in 5 years an xbox could be a "deeper mind" than a human. elon warned us. no one listened.
Drkrieger01 - Tuesday, August 25, 2020 - Just remember, this is only the hardware portion to said equation. We still need developers to release products that can harness this power... for good or evil ;)
azfacea - Tuesday, August 25, 2020 - are you suggesting training will take a long time ?? whats to stop a 1 GW super computer doing the training and programming ??
we r years not decades away from being no more useful than monkeys. maybe thats a good thing, maybe its not. maybe it means infinite prosperity and well being for everyone maybe it means we'll be cleansed out and deemed too dangerous but we are for sure not going to be useful anymore.
Dizoja86 - Tuesday, August 25, 2020 - Put down the ayahuasca, friend. We're still a long way away from a technological singularity. Listening too seriously to Elon Musk might be part of the problem you're facing.
Spunjji - Wednesday, August 26, 2020 - The funniest bit is that Elon hasn't even said anything new - he's just repeating things other people were saying a long time before him.
If he ever turns out to have been right, it will be incidentally so. A prediction isn't any use at all without a timeline.
Santoval - Wednesday, August 26, 2020 - Exactly. Others like Stephen Hawking started sounding the alarm quite a bit earlier.
edzieba - Wednesday, August 26, 2020 - Computers are very dumb, very quickly. 'Deep Learning' is very dumb, in vast parallel.
While the current AI boom is very impressive, it is fundamentally implementing techniques from many decades ago (my last Uni course was a decade ago and the techniques were decades old THEN!) but just throwing more compute power at them to make them commercially viable. The problem is always how to train your neural networks, and 'Deep Learning' merely turned that from a tedious, finicky and slow task to a merely tedious and finicky one.
Or in other words: if you want your kill-all-humans skynet AI, you're going to have to find someone who wants to make a robust and wide coverage killable-human-or-friendly-robot training dataset, and debug why it decides it wants to destroy only pink teapots.
azfacea - Wednesday, August 26, 2020 - so you are saying what evolution did in humans was impossible to do because it should've never gone past pink teapots ?? got it.
who cares if techniques were old if you lacked processing power to use them. ramjets were conceived of 50 years before the first turbojet flew.
melgross - Wednesday, August 26, 2020 - You don’t understand how this works. Life isn’t science fiction. While some things are just steady in their progress, such as electronic and mechanical systems, until we have a far better understanding of how our brain works, we won’t be able to have a machine equal it. Processing speed and scads of memory aren’t enough. Even neural networks aren’t close to being enough.