Reinvent the wheel - Artificial neural network

Yesterday, after 10 years of diving down into the ocean of Uniinfo and just a few months of swimming back up, I ended up with a model of my autonoton almost identical to the very hot “artificial neuron” of the current “deep learning” trend. So what I’ve called “uninet” seems to be just the good old “artificial neural network” (ANN), and those 10 years of mine turn out to be just a waste?!!!

At that moment, I just smiled, hearing all the critical comments against my work echo back:

  • “You stupid! All those great things have been invented by great men and have already taken a lot of hard time and effort. So why don’t you just apply them instead of always reworking from the beginning?!” – my dad often scolds me.
  • “Don’t reinvent the wheel!” – my friends often warn me.
  • ...[after a long discussion about Uniinfo]... “So, what’s your original (new) point? What’s the difference between yours and the laws stated by philosophers a long time ago?!” – even my closest friend (in the field) sometimes asks me this, and I just answer: “...uhm... maybe no diff! Actually Uniinfo is just my way to re-understand the Universe, which of course has been understood by millions since ancient times.”
  • [...]

I just smiled and said in my mind, “I know! I know what I’m doing.” I was so pleased and thankful that, on the way swimming up (from the abstract to the concrete), I had seen how the whole Universe (via Uniinfo as a philosophy) manifests itself in its building block, the Quantum (a theoretical-physics model in UniThread theory), and then how neurons, the building blocks of our brain, as well as autonota, the building blocks of the upcoming “wise computer” (notor), emerge resembling the Quantum. That’s why we can understand the Universe using our neurons.

Then I looked back at the state of the art of ANN, to see what people have learned about its mechanism... Almost nothing beyond “it works!” What’s the semantics of ANN? Almost nothing beyond a crude model of the biological neuron!

Even as a model of the neurons in our brains, ANN is crude, with the following man-made artifacts that differ from both biological neurons and autonota:

  1. “Bias” instead of wave: In real life, neurons communicate with each other using trains of firings, just like fireflies, oscillating between accumulating input (building up membrane potential) and firing a spike (an “action potential”). That’s similar to the “signal waves” of autonota in uninet, where the wave’s frequency (rate of firing) and wave synchronization determine how they communicate. The communication between artificial neurons in an ANN is very different: a single pass of signals from one layer to the next, where all “neurons” in a layer fire at once, with no pulse rate and no frequency. There, firing is gated by a fixed number called the “bias”, which simulates the threshold the accumulated membrane potential must reach before firing (see the first sketch after this list).
    => Instead of learning reality as a “superposition of waves”, as in a Fourier series, an ANN approximates functions piecewise, as splines do (the second sketch after this list compares the two).
    => That’s the reason why the predictions of an ANN are often biased and overfitted.
  2. Full/fixed connections with “weights” instead of live connection forming/pruning: In real life, new connections between neurons are continuously formed and old ones are pruned based on the communication between them; this is called neuroplasticity. But in an ANN, the “neurons” are fully connected between layers (in the traditional ANN) or follow some fixed topology in newer ANNs such as CNNs. There, neuroplasticity is mimicked merely by adjusting the “weights” of the connections (the third sketch after this list contrasts the two).
    => That’s “the curse of dimensionality”, which costs ANN training a lot of computing power.
  3. Feed-forward (unidirectional) communication instead of communication in circles (with feedback, bidirectional): In real life, there are various neural circuits in which neurons connect with each other in loops. But out of fear of uncontrollability, most ANN research deals only with feed-forward signaling: no loop, no feedback! In our brain, only the neurons specialized in perception, i.e. recognizing features in the input signal, are connected in a feed-forward structure. There is a class of ANNs called “recurrent neural networks” (RNNs), whose connections can form loops, but they have been much less explored (the last sketch after this list shows the difference).
    => That’s why the current ANN is good at pattern recognition but still very far from “general intelligence”, a.k.a. “strong AI”.
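To make point 1 concrete, here is a minimal Python sketch (the numbers and names are mine, purely for illustration): a standard ANN unit that fires exactly once per pass, gated by its bias, next to a toy leaky integrate-and-fire unit whose output is a firing rate.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """ANN unit: a one-shot weighted sum gated by a fixed bias.
    It fires exactly once per forward pass: no pulse train, no frequency."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))                  # sigmoid activation

def firing_rate(input_current, steps=200, threshold=1.0, leak=0.05):
    """Toy integrate-and-fire unit: it accumulates membrane potential and
    emits a spike each time the threshold is crossed, then resets.
    The signal is the *rate* of spikes, not a single one-shot value."""
    potential, spikes = 0.0, 0
    for _ in range(steps):
        potential += input_current - leak * potential  # leaky accumulation
        if potential >= threshold:                     # threshold reached:
            spikes += 1                                #   fire a spike
            potential = 0.0                            #   and reset
    return spikes / steps

print(artificial_neuron([0.5, -0.2], [0.8, 0.4], bias=-0.1))  # one value, once
print(firing_rate(0.06), firing_rate(0.12))   # stronger input -> higher rate
```

With the rate-coded unit, a stronger input shows up as a higher spike rate over time; the ANN unit collapses all of that into one number emitted once.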
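To make the Fourier-versus-splines contrast tangible, here is a small self-contained comparison (again my own sketch, with an arbitrary smooth periodic target): both approximators get the same budget of 11 numbers, and for such a target the superposition of waves fits far more accurately than the piecewise-linear fit.

```python
import math

def f(x):                                # a smooth 2*pi-periodic target
    return math.exp(math.sin(x))

def fourier_coeffs(n_terms, n_samples=2000):
    """Numerically integrate the Fourier coefficients a_k, b_k of f."""
    a, b = [], []
    for k in range(n_terms + 1):
        ak = bk = 0.0
        for j in range(n_samples):
            x = 2 * math.pi * j / n_samples
            ak += f(x) * math.cos(k * x)
            bk += f(x) * math.sin(k * x)
        a.append(2 * ak / n_samples)
        b.append(2 * bk / n_samples)
    return a, b

def fourier_eval(x, a, b):
    """Superposition of waves: a0/2 + sum_k (a_k cos kx + b_k sin kx)."""
    s = a[0] / 2
    for k in range(1, len(a)):
        s += a[k] * math.cos(k * x) + b[k] * math.sin(k * x)
    return s

def linear_eval(x, knots, values):
    """Piecewise-linear interpolation between equally spaced knots."""
    i = min(int(x / (2 * math.pi) * (len(knots) - 1)), len(knots) - 2)
    t = (x - knots[i]) / (knots[i + 1] - knots[i])
    return (1 - t) * values[i] + t * values[i + 1]

n = 5                                    # 11 Fourier numbers vs 11 knot values
a, b = fourier_coeffs(n)
knots = [2 * math.pi * j / (2 * n) for j in range(2 * n + 1)]
values = [f(x) for x in knots]

xs = [2 * math.pi * j / 999 for j in range(1000)]
print("max error, Fourier:         ",
      max(abs(f(x) - fourier_eval(x, a, b)) for x in xs))
print("max error, piecewise linear:",
      max(abs(f(x) - linear_eval(x, knots, values)) for x in xs))
```

For a rough, non-smooth target the piecewise fit would fare better; the point here is only how differently the two representations carve up a function.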
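Point 2 as a toy (entirely hypothetical; no real ANN library works this way): connections that can actually appear and disappear depending on their strength, versus the fixed grid of weight slots that a classic ANN only re-scales.

```python
import random

def prune_and_grow(conn, prune_below=0.05, grow_prob=0.02):
    """Toy structural plasticity: drop weak synapses, sprout random new ones.
    None marks a missing connection. A classic ANN instead keeps every slot
    forever and only re-scales the number stored in it."""
    for row in conn:
        for j, w in enumerate(row):
            if w is not None and abs(w) < prune_below:
                row[j] = None                         # prune a weak connection
            elif w is None and random.random() < grow_prob:
                row[j] = random.uniform(-0.1, 0.1)    # sprout a new connection

random.seed(0)
conn = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)]  # dense start
for _ in range(5):
    prune_and_grow(conn)
print("surviving connections:",
      sum(v is not None for row in conn for v in row), "/ 64")
```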
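And point 3 side by side (another illustrative sketch with made-up weights): a feed-forward pass, which maps the same input to the same output forever, next to a minimal recurrent cell whose feedback loop gives it a memory.

```python
import math

def feed_forward(x, layers):
    """Pure feed-forward pass: the signal crosses each layer exactly once
    and never comes back (no loop, no feedback)."""
    for w in layers:                          # w = weight matrix of one layer
        x = [max(0.0, sum(wij * xj for wij, xj in zip(row, x)))  # ReLU unit
             for row in w]
    return x

def recurrent_steps(x_seq, w_in, w_rec):
    """Minimal recurrent cell: the hidden state h loops back into the unit,
    so past firing keeps influencing future firing (a feedback circle)."""
    h = [0.0] * len(w_rec)
    for x in x_seq:                           # feed the same input repeatedly
        h = [math.tanh(sum(a * b for a, b in zip(wi, x)) +
                       sum(a * b for a, b in zip(wr, h)))
             for wi, wr in zip(w_in, w_rec)]
        print([round(v, 3) for v in h])       # ...yet the state keeps evolving
    return h

layers = [[[0.5, -0.3], [0.2, 0.8]], [[-1.0, 1.0]]]
print(feed_forward([1.0, 2.0], layers))       # same input -> same output, always

w_in  = [[0.6, 0.1], [-0.4, 0.7]]
w_rec = [[0.5, -0.5], [0.3, 0.9]]
recurrent_steps([[1.0, 0.5]] * 4, w_in, w_rec)  # memory: output changes each step
```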

* Source: Facebook note, Feb 2020: https://www.facebook.com/notes/2720975368144941/

Additional content:

  • Drawing with Fourier series (an excellent piece of art 😉 by 3Blue1Brown), with his beautiful explanation of the math behind it: “But what is a Fourier series? From heat flow to drawing with circles”.
  • Versus drawing with splines (to see the animation 😉, skip the first minute).
  • Fourier Blackhole, my Processing sketch showing that the “winding machine” of the Fourier transform relates to phase-space shift/rotation in Special Relativity and in the game “Pendulum Wave” 🙂: https://www.openprocessing.org/sketch/849752
  • Actually, more than the Fourier series, the interunion operation used in the autonoton is much more compact and more natural. The Fourier series is to interunion as the radix system is to the continued fraction, e.g. e = 2.718281828459045... = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...] (a quick numeric check follows below).
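As that quick check (a small sketch of mine), the following Python folds the continued-fraction terms of e back into a rational number, recovering e to many digits from just a dozen small integers:

```python
from fractions import Fraction

def e_cf_terms(n):
    """First n continued-fraction terms of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]."""
    terms, k = [2], 2
    while len(terms) < n:
        terms += [1, k, 1]          # the repeating (1, 2m, 1) pattern
        k += 2
    return terms[:n]

def cf_to_fraction(terms):
    """Fold [a0; a1, a2, ...] back into a single exact rational number."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

terms = e_cf_terms(12)              # [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8]
approx = cf_to_fraction(terms)
print(terms)
print(f"{float(approx):.15f}  vs  e = 2.718281828459045")
```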
