The startup spun out of MIT’s CSAIL says its Liquid Foundation Models require less memory thanks to a post-transformer architecture.
View Article on VentureBeat