Posts created by “Andrey”

Personal Information

Written by Andrey

About me

I am a master's student at Sorbonne Université and an employee at @meshinspector.

My interests are very broad:

  • At MeshInspector I develop various algorithms for meshes, point clouds, and voxels. My most prominent contributions are the following:
    • Creation of a novel algorithm for CT data post-processing: Subvoxel Correction
    • Significant improvement of an ML-based segmentation algorithm for meshes (per-entity success rate reached 95%)
    • Improvement of the mesh articulation algorithm (in both accuracy and execution time)
    • Creation of an underparameterized model (based on PCA) for specific types of objects
    • Creation of a novel voxel-segmentation algorithm based on a 3D fully convolutional network
  • Previously, I worked in the domain of satellite data processing:
    • I trained a number of computer vision models for different tasks (segmentation, style transfer, detection).
    • I created a universal inference engine (using TensorRT and OpenVINO to target different types of hardware); a toy sketch of this dispatch idea follows the list.
  • I graduated from the Moscow Engineering Physics Institute with a prototype of a compiler for a dependently typed language as my thesis. Through this project, I learned state-of-the-art approaches and tools at different levels of abstraction: from high-level type systems such as CoC, CoIC, and Hindley-Milner down to MLIR dialects and conversions for final assembly generation.
  • I enjoy learning abstract mathematics and, sometimes, physics
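
A toy sketch of the "universal inference engine" idea mentioned above: a single interface that dispatches to different hardware backends. This is not MeshInspector's actual code; the class names and selection logic are hypothetical, and the real backends would wrap TensorRT and OpenVINO runtime calls.

```python
# Hypothetical sketch: one inference interface, multiple hardware backends.
from abc import ABC, abstractmethod
import numpy as np

class InferenceBackend(ABC):
    @abstractmethod
    def load(self, model_path: str) -> None: ...

    @abstractmethod
    def run(self, inputs: np.ndarray) -> np.ndarray: ...

class TensorRTBackend(InferenceBackend):
    """Would wrap TensorRT engine building/execution (NVIDIA GPUs)."""
    def load(self, model_path: str) -> None: ...
    def run(self, inputs: np.ndarray) -> np.ndarray: ...

class OpenVINOBackend(InferenceBackend):
    """Would wrap OpenVINO compiled models (Intel CPUs/iGPUs)."""
    def load(self, model_path: str) -> None: ...
    def run(self, inputs: np.ndarray) -> np.ndarray: ...

def make_backend(gpu_available: bool) -> InferenceBackend:
    # Real code would probe the machine's devices instead of taking a flag.
    return TensorRTBackend() if gpu_available else OpenVINOBackend()
```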

Projects

Some of my university projects can be found on my GitHub page:

@meshinspector also has a public repository where you can see some of my contributions.

Miles Cranmer - The Next Great Scientific Theory is Hiding Inside a Neural Network

Written by Andrey

Original video.

A brief summary of the ideas:

  1. Part 1
    1. PySR -- a symbolic regression solver that fits data by searching for mathematical expressions via genetic algorithms.
    2. This solver can be used to fit the output of a conventional neural network.
    3. The key part is that a neural network can be dissected and fitted part by part. We obviously can't do this with the raw data, but using a conventional neural network as an approximator gives us a number of smaller and simpler relations, each of which can be fitted with symbolic regression (a minimal sketch of this workflow follows the list).
  2. Part 2
    1. Using pre-trained models helps; this is especially visible in NLP. It probably works [citation needed] because different domains share structure (for example, all modern languages share the notions of grammar, punctuation, etc.), so extending the dataset gives models an expected boost in performance.
    2. Different areas of science also share concepts, so it would be good to find a way to exploit them.
    3. The corresponding project is Polymathic AI.
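
To make Part 1 concrete, here is a minimal sketch of the fit-the-network workflow: train an ordinary network as a smooth approximator of some hidden law, then hand its input-output pairs to PySR. The hidden function, the architecture, and all hyperparameters are illustrative choices of mine, not taken from the talk; PySR (with its Julia backend) and PyTorch are assumed to be installed.

```python
import numpy as np
import torch
import torch.nn as nn
from pysr import PySRRegressor  # pip install pysr

# 1. Synthetic data from a "hidden" law we pretend not to know.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(1000, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2

# 2. Fit a conventional neural network as a smooth approximator.
net = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X_t), y_t)
    loss.backward()
    opt.step()

# 3. Run symbolic regression on the network's outputs, not the raw data.
with torch.no_grad():
    y_net = net(X_t).squeeze(1).numpy()

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "sin"],
)
model.fit(X, y_net)
print(model)  # best expression should approach 2.5*cos(x0) + x1^2
```

Here the whole network is fitted at once; the point of item 1.3 is that the same call can instead target each sub-component of a dissected model, where the learned relations are simpler.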

(Probably?) open questions:

  • Applications to different models. All the examples seem to come from PDE-related problems in fluid dynamics. What about other areas like quantum mechanics?
  • Building symbolic reasoning into the neural network so that it could "model" pure mathematics (starting from logic/type theory).