NexRI Vision

We believe in a future where AI does not merely predict, but truly understands and adapts. By rethinking the foundations of intelligence, we are shaping a new era of AI: moving beyond probabilistic models toward entirely new paradigms inspired by human cognition.

About Our Approach

Since 2019, our team has been developing a groundbreaking mathematical framework and architectural solutions, from hardware to software, that leverage the human ability to structure information, fill in gaps, and generalize patterns. Instead of relying on statistical prediction like traditional Large Language Models (LLMs), our technology is designed to understand and compose complex structures, enabling AI to think and adapt in a way that mirrors human reasoning.

Scientific Background

The Block Design Test as a Measure of Intelligence: A Critical Review

Authors: Kiley McKee, Danielle Rothschild, Stephanie Ruth Young, David H Uttal

Summary: This article critically examines the Block Design Test, emphasizing its importance in assessing visuospatial and constructive abilities. The authors discuss the need to consider cultural and educational factors when interpreting test results.

Link: PMC11204419

Visuospatial Reasoning Abilities in Children: Assessing the Role of Shape Matching Tasks

Authors: G. D'Aurizio, I. Di Pompeo, N. Passarello, E. Troisi Lopez, P. Sorrentino, G. Curcio, L. Mandolesi

Summary: This study investigates how shape-matching tasks can evaluate visuospatial reasoning skills in children. Findings suggest that these tasks effectively identify developmental stages of these abilities.

Link: PMC9936130

Neural Correlates of Object and Shape Matching: An fMRI Study

Authors: Seunghwan Cha, James Ainooson, Eunji Chong, Isabelle Soulieres, James M. Rehg

Summary: Utilizing functional MRI, this research identifies brain regions activated during object and shape-matching tasks, providing insights into the neural mechanisms underlying these cognitive processes.

Link: PDF

Cultural Influences on Shape-Sorting Task Performance in Early Childhood

Authors: Zaid Alkouri

Summary: This paper explores how cultural differences affect children's performance in shape-sorting tasks, highlighting the significance of cultural context in assessing cognitive abilities related to shape matching.

Link: https://doi.org/10.1080/2331186X.2022.2083471

Developmental Trajectories of Shape Matching Skills in Preschoolers

Authors: Solmaz Soluki, Samira Yazdani, Ali Akbar Arjmandnia, Jalil Fathabadi, Saeid Hassanzadeh

Summary: This research traces the development of shape-matching skills in preschool children, identifying key stages and factors influencing these abilities. The authors suggest that early interventions can support the development of visuospatial intelligence.

Link: PDF

AdderNet: Do We Really Need Multiplications in Deep Learning?

Authors: Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

Summary: This paper introduces AdderNet, a neural network where multiplication operations in convolutional layers are replaced with addition, significantly reducing computational costs while maintaining high model accuracy.

Link: https://arxiv.org/abs/1912.13200
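The core idea of AdderNet can be sketched in a few lines: instead of measuring input-filter similarity with a dot product (multiplications), the layer uses the negative L1 distance, which needs only subtractions, absolute values, and additions. The toy below is a NumPy illustration of that similarity measure, not the paper's CUDA implementation.

```python
import numpy as np

def adder_layer(x, w):
    """AdderNet-style layer: the response of each output unit is the
    negative L1 distance between the input and that unit's filter,
    so the core operation uses no multiplications.
    x: (batch, in_features), w: (out_features, in_features)."""
    # For each output unit j: y[:, j] = -sum_i |x[:, i] - w[j, i]|
    return -np.abs(x[:, None, :] - w[None, :, :]).sum(axis=-1)

x = np.array([[1.0, 2.0, 3.0]])
w = np.array([[1.0, 2.0, 3.0],   # identical to the input -> distance 0
              [0.0, 0.0, 0.0]])  # far from the input -> large negative output
y = adder_layer(x, w)
# y[0, 0] == 0.0 (perfect match), y[0, 1] == -6.0
```

A perfectly matching filter yields the maximum response of 0, and responses grow more negative as filters diverge from the input, mirroring how dot-product similarity peaks for aligned filters.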

Universal Adder Neural Networks

Authors: Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Chunjing Xu, Tong Zhang

Summary: This study explores the theoretical foundations of AdderNet, demonstrating that such networks are universal function approximators, confirming their potential as an alternative to traditional multiplication-based neural networks.

Link: https://arxiv.org/abs/2105.14202

A Differentiable Transition Between Additive and Multiplicative Neurons

Authors: Wiebke Köpp, Patrick van der Smagt, Sebastian Urban

Summary: This paper introduces a parameterizable transition function that enables neurons to smoothly switch between additive and multiplicative operations, allowing this decision to be integrated into standard backpropagation.

Link: https://arxiv.org/abs/1604.03736
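To make the additive-versus-multiplicative choice concrete, the sketch below linearly blends the two responses under a parameter t. This is a deliberate simplification, not the paper's fractional-iterate construction, but it shows the key property: because the blend is differentiable in t, the operating mode itself can be learned by backpropagation.

```python
import numpy as np

def blended_neuron(x, w, t):
    """Simplified illustration (NOT the paper's construction):
    linearly interpolate between an additive response sum(w * x)
    and a multiplicative response prod(x ** w), controlled by
    t in [0, 1]. Gradients flow through t as well as w."""
    additive = np.sum(w * x)
    multiplicative = np.prod(x ** w)
    return (1.0 - t) * additive + t * multiplicative

x = np.array([2.0, 3.0])
w = np.array([1.0, 1.0])
print(blended_neuron(x, w, 0.0))  # 5.0  (pure sum)
print(blended_neuron(x, w, 1.0))  # 6.0  (pure product)
print(blended_neuron(x, w, 0.5))  # 5.5  (halfway)
```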

Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions

Authors: Ido Ben-Shaul, Tomer Galanti, Shai Dekel

Summary: This research analyzes the approximation capabilities of neural networks with multiplicative layers, showing that they can more efficiently approximate smooth functions compared to traditional ReLU-based networks.

Link: https://arxiv.org/abs/2301.04605

About Our Approach Image

Our Technology

Current mathematical approaches in AI rely heavily on the multiplication and addition of weights and biases, operations that differ significantly from the way the human brain processes information. The brain operates across multiple domains: beyond amplitude, it also processes frequency and phase. Moreover, it utilizes a wide variety of in-neuron operators, enabling not only scaling through multiplication but also rotation, translation, and mirroring of signals, going far beyond simple transformations.
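One standard mathematical setting in which scaling, rotation, translation, and mirroring each reduce to a single elementary operation is complex arithmetic. The snippet below is purely illustrative of that general fact, not NexRI's proprietary formulation.

```python
import cmath

# Represent a 2-D signal sample as a complex number: amplitude is |z|,
# phase is arg(z). Each elementary transformation beyond scaling is
# then a single complex operation.
z = complex(1.0, 0.0)                          # unit signal on the real axis

scaled     = 2.0 * z                           # amplitude scaling
rotated    = z * cmath.exp(1j * cmath.pi / 2)  # 90-degree phase rotation
translated = z + complex(0.0, 1.0)             # translation in the plane
mirrored   = z.conjugate()                     # mirror across the real axis

# rotated ~= 1j, translated == 1+1j, mirrored == 1
```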

Our breakthrough lies in utilizing formulas that not only enhance the expressiveness of neural network layers but also drastically reduce the energy required for elementary unit processing. In other words, there is no point in developing highly expressive yet complex formulas if they simultaneously increase hardware demands.

As an example, a DNA helix separation problem is presented, solved using the conventional perceptron approach, labeled (P), and compared to the innovative Q-Base function approach, labeled (Q+). The advantages of Q-Functions are immediately apparent: their effectiveness stems from a smooth, continuous boundary surface, fundamentally different from the sharp-edged planes produced by traditional multiplication and activation functions. This approach significantly reduces the number of multiplications required in a layer while maintaining high accuracy, and it unlocks new pathways for model and layer quantization that far surpass conventional bit-depth reduction.
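The Q-Base function itself is proprietary, but the qualitative contrast between a sharp and a smooth decision boundary can be illustrated generically: a perceptron separates points with a flat hyperplane, while a radial unit (used here only as a stand-in smooth boundary, not as the Q-Base function) separates them with a curved surface. All names and parameters below are illustrative.

```python
import numpy as np

def perceptron(x, w, b):
    """Decision boundary is the flat hyperplane w . x + b = 0."""
    return np.sign(x @ w + b)

def radial_unit(x, center, radius):
    """Decision boundary is the smooth circle |x - center| = radius."""
    return np.sign(radius - np.linalg.norm(x - center, axis=-1))

pts = np.array([[0.1, 0.0], [3.0, 0.0]])
print(radial_unit(pts, center=np.zeros(2), radius=1.0))        # [ 1. -1.]
print(perceptron(pts, w=np.array([1.0, 0.0]), b=-1.0))         # [-1.  1.]
```

Both units classify the two sample points, but the radial unit's boundary is curved and everywhere smooth, the property the text attributes to Q-Functions, whereas the perceptron's is a straight line.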

Ongoing Advancements

Our Patents

Binary to ternary convertor for multilevel memory

Patent Number: US12119064B2

Summary: This patent describes a memory device with phase change memory (PCM) capable of storing three unique states per cell. It introduces a controller that converts binary data into ternary data at a 3-bit to 2-trit ratio, optimizing data storage efficiency. This approach enhances memory density and performance while reducing power consumption.

Link: US12119064B2
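The 3-bit : 2-trit ratio works because two trits encode 3 x 3 = 9 states, which is just enough to hold all 2^3 = 8 binary states with one state to spare. A minimal base-3 sketch of the mapping is shown below; the actual PCM controller's encoding may differ.

```python
def bits3_to_trits2(value):
    """Map one 3-bit value (0-7) to a pair of trits (each 0, 1, or 2).
    Two trits give 9 states, so all 8 binary states fit with one spare:
    the 3-bit to 2-trit ratio described in the patent. Plain base-3
    encoding; illustrative only."""
    assert 0 <= value <= 7
    return value // 3, value % 3

def trits2_to_bits3(hi, lo):
    """Inverse mapping back to the 3-bit value."""
    return hi * 3 + lo

# Round-trip every 3-bit value.
for v in range(8):
    assert trits2_to_bits3(*bits3_to_trits2(v)) == v
```

Packing 3 bits into 2 cells instead of 3 is the density gain: a ternary cell stores log2(3) ~ 1.58 bits, so two cells carry ~3.17 bits of capacity.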

Initializer for circle distribution for image and video compression and posture detection

Patent Number: US20240135750A1

Summary: This patent introduces an initializer for circle distribution on a 2D surface using a polar coordinate system, applicable to image and video compression, motion detection, and posture detection, with extensions to 3D sphere distribution. It describes a hybrid deterministic and stochastic approach for initialization, transitioning from polar to Cartesian coordinates for efficient processing. Additionally, it presents a neural network model compression method using XNOR/AND architectures and a non-linear expressive perceptron (quadtron) to replace traditional multiplication in MAC architectures.

Link: US20240135750A1
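The hybrid deterministic-and-stochastic initialization with a polar-to-Cartesian transition can be sketched as follows: place points at evenly spaced polar angles (deterministic), optionally jitter each angle (stochastic), then convert to Cartesian coordinates. Parameter names here are illustrative and not taken from the patent.

```python
import math
import random

def init_circle_points(n, radius=1.0, jitter=0.0, seed=0):
    """Sketch of a hybrid initializer: n points at evenly spaced polar
    angles on a circle of the given radius (deterministic), each angle
    optionally perturbed by a uniform offset in [-jitter, jitter]
    (stochastic), then converted from polar to Cartesian coordinates."""
    rng = random.Random(seed)
    points = []
    for k in range(n):
        theta = 2.0 * math.pi * k / n + rng.uniform(-jitter, jitter)
        points.append((radius * math.cos(theta), radius * math.sin(theta)))
    return points

pts = init_circle_points(4)  # jitter defaults to 0, so fully deterministic
# pts ~= [(1, 0), (0, 1), (-1, 0), (0, -1)]
```

Extending the same scheme to a 3-D sphere amounts to adding a second (inclination) angle before the coordinate conversion.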