diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..3721121
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,4 @@
+assets/blog/**/* filter=lfs diff=lfs merge=lfs -text
+assets/fonts/* filter=lfs diff=lfs merge=lfs -text
+assets/icons/* filter=lfs diff=lfs merge=lfs -text
+assets/images/* filter=lfs diff=lfs merge=lfs -text
diff --git a/.gitignore b/.gitignore
index 6b6d602..a96d304 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,9 +1,8 @@
node_modules
public
+package-lock.json
-*.png
-*.jpg
-*.svg
*.log
*.lock
-*.zip
\ No newline at end of file
+*.zip
+*.DS_Store
\ No newline at end of file
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/Deep Learning Framework Benchmarks.md b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/Deep Learning Framework Benchmarks.md
index f8892ea..16b8d2f 100644
--- a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/Deep Learning Framework Benchmarks.md
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/Deep Learning Framework Benchmarks.md
@@ -1,135 +1,3 @@
-# Deep Learning Framework Benchmark
-*Posted on October, 7 2021*
-
-## Preamble
-
-There are a few frameworks for working on Deep Learning neural networks. I used to be very familiar with Tensorflow back in the days when the second version was not yet released. As someone with a software engineering background, the strictness and clarity of this first version of Tensorflow was a joy. Also, the graphs output by tensorboard were amazing, to the point that I got into the habit of debugging my networks from tensorboard most of the time.
-
-
-
-### Graph showed in tensorboard from Tensorflow version 1
-
-
-
-
-
-
-
-Those days are gone: a new era of dynamic programming has come to the Deep Learning field, with PyTorch becoming increasingly popular (from what I have experienced), the second version of Tensorflow converging to the same API, and Jax going a step further with a near-Python programming paradigm. The dynamic paradigm has some very nice points; it makes things much easier, especially if you do reinforcement learning.
-
-There are also other frameworks I haven't yet tested like [MXNet](https://mxnet.apache.org/versions/1.8.0/).
-
-Now most of the frameworks I have experience with have nearly the same API, and ONNX brings a very nice way to output the final result of training independently of the framework. Thus choosing which one to use is less clear-cut than before.
-
-Lately I have been trying out some RNN-like networks with different modifications to improve the infamous *long term memory* problem (hopefully I will post something about that later). Using PyTorch, I was very frustrated that the included [LSTM layer](https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html) ran very well but **equivalent hand-written code would run several times slower** (around 1/3 of the speed), even when following the [official documentation on GPU optimizations](https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/) (which seems deprecated on a few points), sometimes to the point of going at 10% of the initial speed. So if I want to do some research I might as well choose a framework that wouldn't run so slowly that I would need to wait hours for a training that could be performed in minutes. But would other frameworks really give me better performance?
-
-I decided to see for myself how the different frameworks behave, starting from simple operations and hopefully testing up to whole network trainings.
-
-I will use a naming convention for the frameworks (also called platforms in my scripts) tested here:
-
-* TF1 : first version of Tensorflow (version 1.x), as of this writing the latest version is 1.15
-* TF2 : second version of Tensorflow (version 2.x), as of this writing the latest version is 2.6
-* TF2_V1 : second version of Tensorflow but using the compatibility API to write code as in the first version, also disabling the dynamic behaviour (I suspected the performance might differ)
-* Torch : PyTorch
-* Jax
-
-
-## Benchmarking implementation
-
-### No Gradient
-
-This is obvious, but PyTorch is very nice for the majority of cases where you need to compute gradients, which is not the case here as I started with the simplest operations first. The `requires_grad=False` argument on all tensors does the trick in PyTorch, while Tensorflow and Jax don't need any additional care as far as I know.
-
-### Warmup
-
-I have experienced many times on all frameworks so far that the first run is always several times slower. This is expected for the dynamically allocated tensors of the modern frameworks, but I strongly remember it happened too when I was using TF1. To avoid having the first run skew the benchmark, each experiment has a small warmup loop:
-
-```python
-# warmup
-for _ in range(20):
- self.experiment()
-```
-
-### Optimizations
-
-I had to test with **random tensors at start and before each operation** to be sure that frameworks do not optimize away already-computed operations (e.g. through caching), especially since I disabled gradient computation. All my tests showed no difference, so I stuck with tensors initially filled with ones.
-
-### Benchmark time
-
-Each operation is benchmarked during an "experiment". To get a consistent benchmarking time, a first loop estimates the number of operations per second, then the loop being benchmarked is run with a fixed number of steps derived from that estimation. This allows setting the time per experiment in a configuration file for statistical stability and avoids unnecessary calls to the system clock (CPython not being known for its speed, I'd rather have a simple integer increment per loop as overhead).
-
-Later this could also make a progress bar with ETA possible, as the benchmarks can be quite exhaustive.
-
-## Results
-
-**The code is publicly available [here](https://gitlab.com/corentin-pro/dl_bench). It will output raw data as csv files and their plots. All the data and plot from my machine (NVIDIA GeForce RTX 2060 SUPER) can be downloaded [here](/blog/2021-10-07_Deep%20Learning%20Framework%20Benchmarks/gpu_NVIDIA%20GeForce%20RTX%202060%20SUPER.zip).**
-
-
-
-### Experiment benchmark samples
-
-
-
-
-
-
-As expected, the bigger the operations (experiments), the more [GFLOPS](https://en.wikipedia.org/wiki/FLOPS) (Giga floating point operations per second) the GPU can output. So far nothing unexpected.
-
-### Comparisons
-
-Comparison plots are also generated from the experiment data. For now the only comparisons are done between 'platforms' (aka frameworks), but data type comparisons could be interesting in the future. Categories were made to plot subsets of comparisons in order to keep the scale of the y axis linear; the script will automatically switch to logarithmic scale if needed in the general case. The categories are ranges of Mop (millions of operations) per experiment, like `MEDIUM = [20, 1000]` (there are SMALL, MEDIUM, LARGE and VERY_LARGE), and can be changed in the configuration files.
-
-
-
-### Comparison samples
-
-
-
-
-
-
-
-**NOTE** : all operations with the `nn` prefix are run inside a 'module' (or equivalent); in Jax for instance I used `stax` and `jit` as intended by the library. As far as I tested, JIT is not needed for Torch.
-
-Torch seems the best for simple and small operations, while Tensorflow in general seems to have big overheads. Jax does very well once we use the JIT. All frameworks tend to converge with bigger layers/operations, where the XLA-based Tensorflow and Jax seem to have slightly better performance. But for small operations Torch can be orders of magnitude faster!
-
-The results between float32 and float16 are very similar, but float64 is different:
-
-* For some reason TF2 didn't accept matmul on float64 inside a module, I should fix that later
-* TF2 gets better results relative to the other platforms
-* Except for element-wise operations, Torch loses its lead on small operations
-* There is a weird behaviour for the matmul of 800x800 tensors, both in Torch and TF2. After additional testing I couldn't figure out why the first runs (even after warmup) were way too fast.
-
-The specific behaviour of the 800x800 matmul in the data (see `run times (s)`) looks like:
-
-```
- experiment run times (s) count ms/matmul Mop/matmul GFLOPS
-300 800x800 @ 800x800 0.03258013725280762 60 0.5430022875467936 1022.72 1883.4543121733468
-[...]
-308 800x800 @ 800x800 0.032579898834228516 60 0.5429983139038086 1022.72 1883.4680952272229
-309 800x800 @ 800x800 0.03258252143859863 60 0.5430420239766439 1022.72 1883.316492728723
-310 800x800 @ 800x800 0.1323096752166748 60 2.2051612536112466 1022.72 463.7846771183555
-311 800x800 @ 800x800 0.2970736026763916 60 4.951226711273193 1022.72 206.55891148579838
-312 800x800 @ 800x800 0.29687929153442383 60 4.947988192240397 1022.72 206.6941068298959
-[...]
-329 800x800 @ 800x800 0.2968714237213135 60 4.947857062021892 1022.72 206.69958472528631
-```
-
-It is the only instance of such a behaviour across all operations, and even within the matmul benchmark. Because of this the result plot doesn't look great:
-
-
-
-
-## Conclusion
-
-The results so far comfort me in using Torch overall, as I usually design small networks, but Jax seems to be a very interesting contender. I am surprised that the difference on small/medium operations could be that significant between Torch and TF2; I sometimes use my DL framework for GPU-accelerated math in other contexts, so this is interesting to know.
-
-The code is not yet complete and in the future I would like to test for more:
-
-* Convolutions : 1d, 2d, transpose
-* Gradient
-* Optimizer
-* RNN : which was the trigger that started all of this
-* Data transfer? (CPU->GPU and GPU->CPU)
-
-If you have questions or remarks you can contact me or reply to the [reddit post](https://www.reddit.com/r/MachineLearning/comments/q2y9n5/d_deep_learning_framework_benchmark/).
\ No newline at end of file
+version https://git-lfs.github.com/spec/v1
+oid sha256:275efef9888de714403821eea0fa50715428ac959053d460beeeb3f2af748201
+size 10025
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_add_float32.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_add_float32.png
new file mode 100644
index 0000000..dd4fa42
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_add_float32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0bcf8fd1154da11023f1d8800e53ea6183390c602def4b6f8c70e2983b28c03
+size 62125
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_jax_nn_dense_float32.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_jax_nn_dense_float32.png
new file mode 100644
index 0000000..e2986ca
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_jax_nn_dense_float32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a68953c2e45fbf9e255e90f10cc8c3f6d58d3471ab2ea85112c3b1ee1f0a7dab
+size 169172
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_matmul_float64_LARGE.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_matmul_float64_LARGE.png
new file mode 100644
index 0000000..8398e69
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_matmul_float64_LARGE.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37352e1a97614fb5f8facaf79fad8cdfac36d5f93bbec78915e66a07b504489c
+size 48364
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_nn_dense_float32_MEDIUM.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_nn_dense_float32_MEDIUM.png
new file mode 100644
index 0000000..793649c
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_nn_dense_float32_MEDIUM.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49e5e60890aa7567b09afed26a5affc1a29fe66d7275c17cf28d3f6dea67ec29
+size 46611
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_nn_dense_x5_float32_VERY_LARGE.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_nn_dense_x5_float32_VERY_LARGE.png
new file mode 100644
index 0000000..8f90d9c
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_nn_dense_x5_float32_VERY_LARGE.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c526add0e870db621459780a55299a03c6413a23645b9f79c1e8d4e3dbd51a09
+size 45059
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_torch_matmul_float32.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_torch_matmul_float32.png
new file mode 100644
index 0000000..9bbde6c
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/result_torch_matmul_float32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01f3c352ea53bd70d346831a9b226fb206fdb9d21b3edd784f3f06671c6ed1b3
+size 124574
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph.png
new file mode 100644
index 0000000..5ef2d8b
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ca83044fb1946ae70e04211eea974516f6984add55477386c2809be48ae0691
+size 57059
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph_network.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph_network.png
new file mode 100644
index 0000000..63f3216
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph_network.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaf51ddb1ad577c6495664e5c27bffe67886a8a15ebadaed7556b476fda467a8
+size 367618
diff --git a/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph_train.png b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph_train.png
new file mode 100644
index 0000000..2056ed6
--- /dev/null
+++ b/assets/blog/2021-10-07_Deep Learning Framework Benchmarks/tf1_graph_train.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61fba79f5c0d05973c87e405534c776d830d555e5f277abecb15f613365ad1d9
+size 284868
diff --git a/assets/fonts/open_sans.ttf b/assets/fonts/open_sans.ttf
new file mode 100644
index 0000000..49f2be6
--- /dev/null
+++ b/assets/fonts/open_sans.ttf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1b1331ba90e949be8664b073976b4f0369b831f381e13e506d728e50ce29083
+size 529700
diff --git a/assets/icons/arrow_forward.svg b/assets/icons/arrow_forward.svg
new file mode 100644
index 0000000..bb80213
--- /dev/null
+++ b/assets/icons/arrow_forward.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84e04cf4015194b627138ebe60f91d71bf8ad210f11797c8224a03d2674e6cc8
+size 184
diff --git a/assets/icons/brain.svg b/assets/icons/brain.svg
new file mode 100644
index 0000000..94a6a68
--- /dev/null
+++ b/assets/icons/brain.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b8f6c12952c949795727cd17baf68ad01577bec249dcddac16f9aa065368619
+size 2191
diff --git a/assets/icons/brightness_high.svg b/assets/icons/brightness_high.svg
new file mode 100644
index 0000000..496905a
--- /dev/null
+++ b/assets/icons/brightness_high.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ecdf4f32a9b6d7a52917516ef2a6dd954a076c1d6fd946ca20a597ce935f1cd6
+size 360
diff --git a/assets/icons/brightness_medium.svg b/assets/icons/brightness_medium.svg
new file mode 100644
index 0000000..27017eb
--- /dev/null
+++ b/assets/icons/brightness_medium.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fffc9aef6e1758f68389e4d9f264e1d7aa88f91159149d3b1504f8e7d69c5492
+size 283
diff --git a/assets/icons/cloud_server.svg b/assets/icons/cloud_server.svg
new file mode 100644
index 0000000..8575e9b
--- /dev/null
+++ b/assets/icons/cloud_server.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9255f2859771d168123e413f54a146c90870e30268e23bc52449d664bf22dbe
+size 1187
diff --git a/assets/icons/favicon.png b/assets/icons/favicon.png
new file mode 100644
index 0000000..2b89e5e
--- /dev/null
+++ b/assets/icons/favicon.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a8f1e8027319b82e34d149b6223907f9b11098b5160c55486db22b59a9f3dad
+size 2953
diff --git a/assets/icons/favicon.svg b/assets/icons/favicon.svg
new file mode 100644
index 0000000..bc93239
--- /dev/null
+++ b/assets/icons/favicon.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6171b3c6821c6487e9d134077ce0feebc216e23462ecbb8fcd4a1ec737f2643
+size 972
diff --git a/assets/icons/github.svg b/assets/icons/github.svg
new file mode 100644
index 0000000..59c2399
--- /dev/null
+++ b/assets/icons/github.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30bb2da2ef2b1fd4adaaee5ddcd2c50cbe80d10b3ecad9e5a023964d55938b45
+size 1662
diff --git a/assets/icons/gitlab.svg b/assets/icons/gitlab.svg
new file mode 100644
index 0000000..a0dad99
--- /dev/null
+++ b/assets/icons/gitlab.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:238bbcf83ceb82a4324f5b1ea93e9e7bf09988de8c967ae6a71c54f35febb23d
+size 1103
diff --git a/assets/icons/linkedin.svg b/assets/icons/linkedin.svg
new file mode 100644
index 0000000..2590d21
--- /dev/null
+++ b/assets/icons/linkedin.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec5d28be227369573677a7e1277e873a019a40d264c138a0c79ed39d8e02bc0e
+size 672
diff --git a/assets/icons/logo.svg b/assets/icons/logo.svg
new file mode 100644
index 0000000..a3082be
--- /dev/null
+++ b/assets/icons/logo.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97ac6e6b60d235fe6a7844741a15d8b464b896c4e689cfa23a01371fc45dd979
+size 2803
diff --git a/assets/icons/logo_old.svg b/assets/icons/logo_old.svg
new file mode 100644
index 0000000..14d9bda
--- /dev/null
+++ b/assets/icons/logo_old.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e2245ade02c8a8b0a7fe7ec17d6c5ccb5cbc9dc72b193a84fd7ae90f6ab58fb
+size 4438
diff --git a/assets/icons/translate.svg b/assets/icons/translate.svg
new file mode 100644
index 0000000..4160d10
--- /dev/null
+++ b/assets/icons/translate.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a22cda5b8cc6143d165e1fb50bfa59efdf1a2630f742702bfcf983d01f70bf0d
+size 434
diff --git a/assets/icons/window.svg b/assets/icons/window.svg
new file mode 100644
index 0000000..7fb0059
--- /dev/null
+++ b/assets/icons/window.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a000f40a7202ea4143f2f14beb188a518161cf7505273ab337085eb656069da
+size 719
diff --git a/assets/images/kyoto.jpeg b/assets/images/kyoto.jpeg
new file mode 100644
index 0000000..cd1aabc
--- /dev/null
+++ b/assets/images/kyoto.jpeg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e63c8f96888c64cdaafe59e19e942248744add3d2cb973ce03f2b39168348fe8
+size 371764
diff --git a/assets/images/undraw_Artificial_intelligence_oyxx.svg b/assets/images/undraw_Artificial_intelligence_oyxx.svg
new file mode 100644
index 0000000..ef78c44
--- /dev/null
+++ b/assets/images/undraw_Artificial_intelligence_oyxx.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:062b3b8da7335441f90ba31cfb945c8ab2ab4694ade90275789f88e0c24babd0
+size 31281
diff --git a/assets/images/undraw_analytics_5pgy.svg b/assets/images/undraw_analytics_5pgy.svg
new file mode 100644
index 0000000..2cdcf44
--- /dev/null
+++ b/assets/images/undraw_analytics_5pgy.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce07b3c483fa43ff01b87ae58f771d3bf3fcbf8dce761fa3d3cfb705a72d0401
+size 5108
diff --git a/assets/images/undraw_design_data_khdb.svg b/assets/images/undraw_design_data_khdb.svg
new file mode 100644
index 0000000..d27daf3
--- /dev/null
+++ b/assets/images/undraw_design_data_khdb.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11afa0c6eac39466c083384a6818b80382ceafecfa42cda1b4ad9848f5db7328
+size 7168
diff --git a/assets/images/undraw_detailed_analysis_xn7y.svg b/assets/images/undraw_detailed_analysis_xn7y.svg
new file mode 100644
index 0000000..8a6980d
--- /dev/null
+++ b/assets/images/undraw_detailed_analysis_xn7y.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98e555225a5a73ff9ccb57ce9b3d347438fde31c8737a855aabb714316d4fe2d
+size 9292
diff --git a/assets/images/undraw_maintenance_cn7j.svg b/assets/images/undraw_maintenance_cn7j.svg
new file mode 100644
index 0000000..34fc687
--- /dev/null
+++ b/assets/images/undraw_maintenance_cn7j.svg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2b4f44fe1e3dd55cc0b3a5d29c7e55450f3c76c8617c7dc70f40612a9047670
+size 16874
diff --git a/assets/theme/dark.css b/assets/theme/dark.css
index 540bc51..4f46972 100644
--- a/assets/theme/dark.css
+++ b/assets/theme/dark.css
@@ -1,7 +1,8 @@
:root
{
- --main-bg-color: #21262b;
+ --main-bg-color: #1a1b21;
--main-fg-color: #b0b0b0;
+ --strong-fg-color: #fff;
--lighter-bg-color: #31363b;
--lighter-fg-color: #d0d0d0;
--highlight-bg-color: #41464b;
diff --git a/assets/theme/light.css b/assets/theme/light.css
index 19be3bc..8ae8abf 100644
--- a/assets/theme/light.css
+++ b/assets/theme/light.css
@@ -1,8 +1,9 @@
:root
{
- --main-bg-color: #e0e0e0;
- --main-fg-color: #41464b;
- --lighter-bg-color: #c8c8c8;
+ --main-bg-color: #fff;
+ --main-fg-color: #53585e;
+ --strong-fg-color: #000;
+ --lighter-bg-color: #ededed;
--lighter-fg-color: #31363b;
--highlight-bg-color: #b0b0b0;
--highlight-fg-color: #21262b;
diff --git a/package.json b/package.json
index b27ad49..bdf2728 100644
--- a/package.json
+++ b/package.json
@@ -5,27 +5,30 @@
"author": "Risselin Corentin",
"license": "MIT",
"dependencies": {
- "@babel/cli": "^7.7.0",
- "@babel/core": "^7.7.2",
- "@babel/preset-env": "^7.7.1",
- "babel-loader": "^8.0.6",
- "babel-plugin-inferno": "^6.1.0",
- "compression-webpack-plugin": "^9.0.0",
- "inferno": "^7.3.2",
- "inferno-redux": "^7.3.3",
- "inferno-router": "^7.4.10",
- "redux": "^4.0.5",
- "webpack": "5.56.1",
- "webpack-cli": "^4.8.0"
+ "@babel/cli": "7.22.10",
+ "@babel/core": "7.22.11",
+ "@babel/preset-env": "7.22.10",
+ "babel-loader": "9.1.3",
+ "babel-plugin-inferno": "6.6.1",
+ "compression-webpack-plugin": "10.0.0",
+ "inferno": "8.2.2",
+ "inferno-redux": "8.2.2",
+ "inferno-router": "8.2.2",
+ "redux": "4.2.1",
+ "webpack": "5.88.2",
+ "webpack-cli": "5.1.4"
},
"devDependencies": {
- "copy-webpack-plugin": "^9.0.1",
- "css-loader": "^6.3.0",
- "marked": "^3.0.4",
- "sass": "^1.42.1",
- "sass-loader": "^12.1.0",
- "style-loader": "^3.3.0",
- "webpack-dev-server": "^4.3.1"
+ "copy-webpack-plugin": "11.0.0",
+ "css-loader": "6.8.1",
+ "marked": "7.0.5",
+ "sass": "1.66.1",
+ "sass-loader": "13.3.2",
+ "style-loader": "3.3.3",
+ "webpack-dev-server": "4.15.1"
+ },
+ "engines": {
+ "node": "16.20.0"
},
"scripts": {
"build": "webpack --config webpack.prod.js",
diff --git a/src/blog.css b/src/blog.css
index c821e6a..7b79120 100644
--- a/src/blog.css
+++ b/src/blog.css
@@ -8,21 +8,24 @@
.blog > a
{
display: flex;
+ flex-direction: column;
+ align-items: flex-start;
text-decoration: none;
max-width: 1000px;
- margin: 20px auto;
- padding: 10px;
+ margin: 1rem auto;
+ padding: 1.5rem 2rem;
background-color: var(--dim-bg-color);
}
.blog a h2
{
- flex-grow: 1;
- margin: 20px 60px 20px 40px;
+ font-size: 1.25rem;
+ font-weight: 300;
}
.blog a .date
{
- font-size: 0.7em;
- align-self: end;
+ margin-top: 0.25rem;
+ font-size: 0.75rem;
+ font-weight: 600;
}
\ No newline at end of file
diff --git a/src/blog_entry.scss b/src/blog_entry.scss
index 63dc88e..7fe98b4 100644
--- a/src/blog_entry.scss
+++ b/src/blog_entry.scss
@@ -5,7 +5,6 @@
padding: 20px;
max-width: 1200px;
line-height: 1.8rem;
- font-family: "open sans", sans;
h1, h2, h3
{
@@ -63,22 +62,29 @@
font-weight: normal;
font-style: italic;
}
-
+
p
{
display: flex;
align-items: center;
justify-content: center;
+ width: 100%;
}
-
+
img
{
- margin: 0 10px;
+ display: block;
+ margin: 0 0.5rem;
object-fit: contain;
- width: 100%;
+ overflow: auto;
cursor: pointer;
}
}
+
+ p img {
+ display: block;
+ margin: auto;
+ }
}
#image-container
diff --git a/src/home.jsx b/src/home.jsx
index 5ab1b12..a1cf9e7 100644
--- a/src/home.jsx
+++ b/src/home.jsx
@@ -8,29 +8,123 @@ import ArrowSvg from '../assets/icons/arrow_forward.svg'
import Flow1Svg from '../assets/images/undraw_maintenance_cn7j.svg'
import Flow2Svg from '../assets/images/undraw_design_data_khdb.svg'
import Flow3Svg from '../assets/images/undraw_Artificial_intelligence_oyxx.svg'
+import headerImage from '../assets/images/kyoto.jpeg'
+import brainSVG from '../assets/icons/brain.svg'
+import windowSVG from '../assets/icons/window.svg'
+import cloudServerSVG from '../assets/icons/cloud_server.svg'
+import linkedInSVG from '../assets/icons/linkedin.svg'
+import githubSVG from '../assets/icons/github.svg'
+import gitlabSVG from '../assets/icons/gitlab.svg'
+
+
+class HomeComponent extends Component {
+
+ render() {
+ const { t } = this.props
-class HomeComponent extends Component
-{
- render()
- {
return (
-