diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml index 7cc154d..f680ea0 100644 --- a/.github/workflows/deploy.yml +++ b/.github/workflows/deploy.yml @@ -2,7 +2,13 @@ name: CI on: push: - branches: [ "main" ] + branches: + - main + tags: + - 'v*' + pull_request: + branches: + - main jobs: build: @@ -57,7 +63,8 @@ jobs: run: | git add . COMMIT_MESSAGE="Deploying on $(date "+%Y-%m-%d %H:%M:%S")" - git commit -m "${COMMIT_MESSAGE}" + git commit --allow-empty -m "${COMMIT_MESSAGE}" - name: GIT Deploy + if: ${{ github.ref == 'refs/heads/main' }} run: git push origin gh-pages diff --git a/config.toml b/config.toml index fdf6d07..d5ed529 100644 --- a/config.toml +++ b/config.toml @@ -20,10 +20,10 @@ googleAnalytics = "" # update google analytics id url = "download/" weight = 1 -[[menu.main]] - name = "Documentation" - url = "https://docs.lfortran.org/" - weight = 2 +#[[menu.main]] +# name = "Documentation" +# url = "https://docs.lfortran.org/" +# weight = 2 [[menu.main]] name = "Blog" diff --git a/content/_index.md b/content/_index.md index 000fb10..d72f2f2 100644 --- a/content/_index.md +++ b/content/_index.md @@ -5,19 +5,19 @@ features that are being implemented. ## Works today -* **Best possible performance for numerical, array-oriented code** +* **Best possible performance for numerical, array-oriented code** LPython gives you the speed you need for your numerical, array-oriented code. With LPython, you can write Python code that is as fast as C or C++. This is because LPython compiles your code to optimized machine code, which is the fastest way to run code on a computer. -* **Code compatability with CPython** +* **Code compatibility with CPython** If LPython compiles and runs a code, then it will run in CPython. -* **Seamless interoperability with CPython** +* **Seamless interoperability with CPython** LPython can call functions in CPython libraries. This feature permits “break-out” to Numpy, TensorFlow, PyTorch, and even to matplotlib. The break-outs will run at ordinary (slow) Python speeds, but LPython accelerates the mathematical portions to near maximum speed. -* **Just-In-Time (JIT) compilation** - LPython also supports Just-in-time compilation which requires only decorating Python function with @lpython. One can also specify the desired backend, as in, `@lpython(backend=“c”)` or `@lpython(backend=“llvm”)`. Only C is supported at present; LLVM and others will be added in the near future. +* **Just-In-Time (JIT) compilation** + LPython also supports Just-in-time compilation, which requires only decorating a Python function with `@lpython`. One can also specify the desired backend, as in, `@lpython(backend="c")` or `@lpython(backend="llvm")`. Only C is supported at present; LLVM and others will be added in the near future. -* **Clean, modular design, usable as a library** +* **Clean, modular design, usable as a library** LPython is structured around two independent modules, AST (Abstract Syntax Tree) and ASR (Abstract Semantic Representation), both of which are standalone (completely independent of the rest of LPython) and users are @@ -26,13 +26,13 @@ features that are being implemented. [Developer Tutorial](https://docs.lfortran.org/developer_tutorial/) documents for more details. -* **Create executables** +* **Create executables** It can create fast optimized executables unlike other interpreted compilers. -* **Runs on Linux, Mac, Windows and WebAssembly** +* **Runs on Linux, Mac, Windows and WebAssembly** All four platforms are regularly tested by our CI. 
-* **Several backends** +* **Several backends** The LLVM can be used to compile to binaries and for interactive usage. The C/C++ backend translates Python code to a readable C/C++ code. The x86 backend allows very fast compilation directly to x86 machine code. The WebAssembly @@ -43,23 +43,62 @@ features that are being implemented. These features are under development: -* **Interactive, Jupyter support** +* **Interactive, Jupyter support** LPython is coming soon to Jupyter. It can be used as a Jupyter kernel, allowing Python/Julia-style rapid prototyping and an exploratory workflow (`conda install jupyter lpython`). It can also be used from the command-line with an interactive prompt (REPL). -* **Support for diverse hardware** +* **Support for diverse hardware** LLVM makes it possible to run LPython on diverse hardware. We plan to support a wide range of hardware platforms, including: - CPUs: compile Python code to run on CPUs of all architectures, including x86, ARM, and POWER. - GPUs: compile Python code to run on GPUs from NVIDIA, AMD, and Intel. - TPUs: compile Python code to run on TPUs from Google. + - CPUs: compile Python code to run on CPUs of all architectures, including x86, ARM, and POWER. + - GPUs: compile Python code to run on GPUs from NVIDIA, AMD, and Intel. + - TPUs: compile Python code to run on TPUs from Google. Please vote on issues in our [issue tracker] that you want us to prioritize (feel free to create new ones if we are missing anything). +## Links to other available Python compilers: +Name | Total Contributors | Total stars +--|--|-- +[PyTorch](https://github.com/pytorch/pytorch) | 2857 | 69253 +[Pyston](https://github.com/pyston/pyston) | 1263 | 2426 +[mypyc](https://github.com/mypyc/mypyc) | 627 | 15995 +[JAX](https://github.com/google/jax) | 523 | 24010 +[MicroPython](https://github.com/micropython/micropython) | 520 | 16998 +[Cython](https://github.com/cython/cython) | 435 | 8168 +[Numba](https://github.com/numba/numba) | 306 | 8790 +[CuPy](https://github.com/cupy/cupy) | 286 | 7062 +[Taichi](https://github.com/taichi-dev/taichi) | 226 | 24023 +[PyPy](https://www.pypy.org/) | 213 | - +[Triton](https://github.com/openai/triton) | 155 | 7846 +[Nuitka](https://github.com/Nuitka/Nuitka) | 138 | 9385 +[Brython](https://github.com/brython-dev/brython) | 94 | 6058 +[Skulpt](https://github.com/skulpt/skulpt) | 91 | 3256 +[Pythran](https://github.com/serge-sans-paille/pythran) | 58 | 1912 +[DaCe](https://github.com/spcl/dace) | 58 | 419 +[LPython](https://github.com/lcompilers/lpython) | 44 | 1135 +[Weld](https://github.com/weld-project/weld) | 35 | 2945 +[IronPython](https://github.com/IronLanguages/ironpython3) | 33 | 2179 +[Transcrypt](https://github.com/TranscryptOrg/transcrypt) | 33 | 2727 +[Pyccel](https://github.com/pyccel/pyccel) | 32 | 279 +[Pyjs](https://github.com/pyjs/pyjs) | 30 | 1123 +[Grumpy](https://github.com/google/grumpy) | 29 | 10580 +[Mojo](https://github.com/modularml/mojo) | 26 | 15569 +[uarray](https://github.com/Quansight-Labs/uarray) | 22 | 98 +[Shedskin](https://github.com/shedskin/shedskin) | 20 | 701 +[Jython](https://github.com/jython/jython) | 18 | 897 +[Codon](https://github.com/exaloop/codon) | 12 | 13431 +[Compyle](https://github.com/pypr/compyle) | 11 | 67 +[Seq](https://github.com/seq-lang/seq) | 9 | 680 +[Hope](https://github.com/jakeret/hope) | 6 | 385 +[Transonic](https://github.com/fluiddyn/transonic) | 3 | 105 + +Note: we use "-" if there is no github repository. 
If any compiler is missing, +or the stats are inaccurate, please let us know. + [issue tracker]: https://github.com/lcompilers/lpython/issues diff --git a/content/benefits/benefit-4.md b/content/benefits/benefit-4.md index 45a8490..c572c28 100644 --- a/content/benefits/benefit-4.md +++ b/content/benefits/benefit-4.md @@ -1,5 +1,5 @@ --- title: "Just-In-Time (JIT)" -icon: "fa fa-cogs" +icon: "fa fa-clock" --- LPython supports Just-in-time compilation which requires only decorating Python function with @lpython. diff --git a/content/benefits/benefit-5.md b/content/benefits/benefit-5.md index dc65bec..bbe77dc 100644 --- a/content/benefits/benefit-5.md +++ b/content/benefits/benefit-5.md @@ -1,6 +1,6 @@ --- title: "Interoperability with CPython" -icon: "fa fa-python" +icon: "fa fa-cogs" --- LPython offers seamless interoperability with CPython. One can easily call functions in CPython libraries diff --git a/content/benefits/benefit-6.md b/content/benefits/benefit-6.md index 1ef7e69..3bdd7a8 100644 --- a/content/benefits/benefit-6.md +++ b/content/benefits/benefit-6.md @@ -2,4 +2,4 @@ title: "Open source" icon: "fas fa-code-branch" --- -LPython, being an open-source project, enjoys the advantages of cost-effectiveness, transparency, community collaboration, flexibility, rapid bug resolution, enhanced security, sustainability, knowledge exchange, worldwide support. +LPython, being an open-source project, enjoys the advantages of community collab, transparency, rapid bug resolution, enhanced security, knowledge exchange and more. diff --git a/content/blog/images/color.png b/content/blog/images/color.png new file mode 100644 index 0000000..97f81cb Binary files /dev/null and b/content/blog/images/color.png differ diff --git a/content/blog/images/graph.png b/content/blog/images/graph.png new file mode 100644 index 0000000..746fc08 Binary files /dev/null and b/content/blog/images/graph.png differ diff --git a/content/blog/images/gray.png b/content/blog/images/gray.png new file mode 100644 index 0000000..85f8748 Binary files /dev/null and b/content/blog/images/gray.png differ diff --git a/content/blog/images/lcompilers_diagram.png b/content/blog/images/lcompilers_diagram.png new file mode 100644 index 0000000..85625d1 Binary files /dev/null and b/content/blog/images/lcompilers_diagram.png differ diff --git a/content/blog/lpython_mvp.md b/content/blog/lpython_mvp.md index 750c5d2..b567f68 100644 --- a/content/blog/lpython_mvp.md +++ b/content/blog/lpython_mvp.md @@ -1,8 +1,8 @@ --- -title: "LPython: Making Python faster with LLVM" +title: "LPython: Novel, Fast, Retargetable Python Compiler" date: 2023-07-28 tags: ["Python", "Announcement"] -author: "[Ondřej Čertík](https://ondrejcertik.com/), [Brian Beckman](https://www.linkedin.com/in/brianbeckman), [Gagandeep Singh](https://github.com/czgdp1807), [Thirumalai Shaktivel](https://www.linkedin.com/in/thirumalai-shaktivel/), [Rohit Goswami](https://rgoswami.me), [Smit Lunagariya](https://www.linkedin.com/in/smit-lunagariya-356b93179/), [Ubaid Shaikh](https://Shaikh-Ubaid.github.io/), [Pranav Goswami](https://www.linkedin.com/in/pranavgoswami1/)" +author: "[Ondřej Čertík](https://ondrejcertik.com/), [Brian Beckman](https://www.linkedin.com/in/brianbeckman), [Gagandeep Singh](https://github.com/czgdp1807), [Thirumalai Shaktivel](https://www.linkedin.com/in/thirumalai-shaktivel/), [Smit Lunagariya](https://www.linkedin.com/in/smit-lunagariya-356b93179/), [Ubaid Shaikh](https://Shaikh-Ubaid.github.io/), [Naman Gera](https://github.com/namannimmo10), [Pranav 
Goswami](https://www.linkedin.com/in/pranavgoswami1/), [Rohit Goswami](https://rgoswami.me), [Dominic Poerio](https://github.com/dpoerio), [Akshānsh Bhatt](https://github.com/akshanshbhatt), [Virendra Kabra](https://www.linkedin.com/in/virendrakabra/), [Luthfan Lubis](https://github.com/ansharlubis)" type: post draft: false --- @@ -11,11 +11,16 @@ draft: false LPython is a Python compiler that can compile type-annotated Python code to optimized machine code. LPython offers several backends such as LLVM, C, C++, WASM, Julia and x86. LPython features quick compilation and runtime performance, as we show in the benchmarks in this blog. LPython also offers Just-In-Time (JIT) compilation and seamless interoperability with CPython. +We are releasing an alpha version of LPython, meaning it is expected you +encounter bugs when you use it (please report them!). You can install it using +Conda (`conda install -c conda-forge lpython`), or build from +[source](https://github.com/lcompilers/lpython). + Based on the novel Abstract Semantic Representation (ASR) shared with LFortran, LPython's intermediate optimizations are independent of the backends and frontends. The two compilers, LPython and LFortran, share all benefits of improvements at the ASR level. "Speed" is the chief tenet of the LPython project. Our objective is to produce a compiler that both runs exceptionally fast and generates exceptionally fast code. In this blog, we describe features of LPython including Ahead-of-Time (AoT) compilation, JIT compilation, and interoperability with CPython. We also showcase LPython's performance against its competitors such as Numba and C++ via several benchmarks. -![LCompilers-Diagram](https://hackmd.io/_uploads/rJFejQpc3.png) +![LCompilers-Diagram](https://lpython.org/blog/images/lcompilers_diagram.png) ## Features of LPython @@ -45,7 +50,91 @@ for i0 in range(0, length_dim_0): After applying all the ASR-to-ASR passes, LPython sends the final ASR to the backends selected by the user, via command-line arguments like, `--show-c` (generates C code), `--show-llvm` (generates LLVM code). - +One can also see the generated C or LLVM code using the following +```py +from lpython import i32 + +def main(): + x: i32 + x = (2+3)*5 + print(x) + +main() +``` +```c +$ lpython examples/expr2.py --show-c +#include + +#include +#include +#include +#include +#include + +void main0(); +void __main____global_statements(); + +// Implementations +void main0() +{ + int32_t x; + x = (2 + 3)*5; + printf("%d\n", x); +} + +void __main____global_statements() +{ + main0(); +} + +int main(int argc, char* argv[]) +{ + _lpython_set_argv(argc, argv); + __main____global_statements(); + return 0; +} +``` +```llvm +$ lpython examples/expr2.py --show-llvm +; ModuleID = 'LFortran' +source_filename = "LFortran" + +@0 = private unnamed_addr constant [2 x i8] c" \00", align 1 +@1 = private unnamed_addr constant [2 x i8] c"\0A\00", align 1 +@2 = private unnamed_addr constant [5 x i8] c"%d%s\00", align 1 + +define void @__module___main_____main____global_statements() { +.entry: + call void @__module___main___main0() + br label %return + +return: ; preds = %.entry + ret void +} + +define void @__module___main___main0() { +.entry: + %x = alloca i32, align 4 + store i32 25, i32* %x, align 4 + %0 = load i32, i32* %x, align 4 + call void (i8*, ...) 
@_lfortran_printf(i8* getelementptr inbounds ([5 x i8], [5 x i8]* @2, i32 0, i32 0), i32 %0, i8* getelementptr inbounds ([2 x i8], [2 x i8]* @1, i32 0, i32 0)) + br label %return + +return: ; preds = %.entry + ret void +} + +declare void @_lfortran_printf(i8*, ...) + +define i32 @main(i32 %0, i8** %1) { +.entry: + call void @_lpython_set_argv(i32 %0, i8** %1) + call void @__module___main_____main____global_statements() + ret i32 0 +} + +declare void @_lpython_set_argv(i32, i8**) +``` ### Machine Independent Code Optimisations @@ -58,11 +147,438 @@ LPython implements several machine-independent optimisations via ASR-to-ASR pass 5. Transforming division to multiplication operation 6. Fused multiplication and addition -All optimizations are applied via one command-line argument, --fast. To select individual optimizations instead, write a command-line argument like the following: +All optimizations are applied via one command-line argument, `--fast`. To select individual optimizations instead, write a command-line argument like the following: `--pass=inline_function_calls,loop_unroll` - +Following is an examples of ASR and transformed ASR after applying the optimisations + +```py +from lpython import i32 + +def compute_x() -> i32: + return (2 * 3) ** 1 + 2 + +def main(): + x: i32 = compute_x() + print(x) + +main() +``` +```clojure +$ lpython examples/expr2.py --show-asr +(TranslationUnit + (SymbolTable + 1 + { + __main__: + (Module + (SymbolTable + 2 + { + __main____global_statements: + (Function + (SymbolTable + 5 + { + + }) + __main____global_statements + (FunctionType + [] + () + Source + Implementation + () + .false. + .false. + .false. + .false. + .false. + [] + [] + .false. + ) + [main] + [] + [(SubroutineCall + 2 main + () + [] + () + )] + () + Public + .false. + .false. + () + ), + compute_x: + (Function + (SymbolTable + 3 + { + _lpython_return_variable: + (Variable + 3 + _lpython_return_variable + [] + ReturnVar + () + () + Default + (Integer 4) + () + Source + Public + Required + .false. + ) + }) + compute_x + (FunctionType + [] + (Integer 4) + Source + Implementation + () + .false. + .false. + .false. + .false. + .false. + [] + [] + .false. + ) + [] + [] + [(= + (Var 3 _lpython_return_variable) + (IntegerBinOp + (IntegerBinOp + (IntegerBinOp + (IntegerConstant 2 (Integer 4)) + Mul + (IntegerConstant 3 (Integer 4)) + (Integer 4) + (IntegerConstant 6 (Integer 4)) + ) + Pow + (IntegerConstant 1 (Integer 4)) + (Integer 4) + (IntegerConstant 6 (Integer 4)) + ) + Add + (IntegerConstant 2 (Integer 4)) + (Integer 4) + (IntegerConstant 8 (Integer 4)) + ) + () + ) + (Return)] + (Var 3 _lpython_return_variable) + Public + .false. + .false. + () + ), + main: + (Function + (SymbolTable + 4 + { + x: + (Variable + 4 + x + [] + Local + () + () + Default + (Integer 4) + () + Source + Public + Required + .false. + ) + }) + main + (FunctionType + [] + () + Source + Implementation + () + .false. + .false. + .false. + .false. + .false. + [] + [] + .false. + ) + [compute_x] + [] + [(= + (Var 4 x) + (FunctionCall + 2 compute_x + () + [] + (Integer 4) + () + () + ) + () + ) + (Print + () + [(Var 4 x)] + () + () + )] + () + Public + .false. + .false. + () + ) + }) + __main__ + [] + .false. + .false. 
+ ), + main_program: + (Program + (SymbolTable + 6 + { + __main____global_statements: + (ExternalSymbol + 6 + __main____global_statements + 2 __main____global_statements + __main__ + [] + __main____global_statements + Public + ) + }) + main_program + [__main__] + [(SubroutineCall + 6 __main____global_statements + 2 __main____global_statements + [] + () + )] + ) + }) + [] +) +``` +```clojure +$ lpython examples/expr2.py --show-asr --pass=inline_function_calls,unused_functions +(TranslationUnit + (SymbolTable + 1 + { + __main__: + (Module + (SymbolTable + 2 + { + __main____global_statements: + (Function + (SymbolTable + 5 + { + + }) + __main____global_statements + (FunctionType + [] + () + Source + Implementation + () + .false. + .false. + .false. + .false. + .false. + [] + [] + .false. + ) + [main] + [] + [(SubroutineCall + 2 main + () + [] + () + )] + () + Public + .false. + .false. + () + ), + main: + (Function + (SymbolTable + 4 + { + _lpython_return_variable_compute_x: + (Variable + 4 + _lpython_return_variable_compute_x + [] + Local + () + () + Default + (Integer 4) + () + Source + Public + Required + .false. + ), + x: + (Variable + 4 + x + [] + Local + () + () + Default + (Integer 4) + () + Source + Public + Required + .false. + ), + ~empty_block: + (Block + (SymbolTable + 7 + { + + }) + ~empty_block + [] + ) + }) + main + (FunctionType + [] + () + Source + Implementation + () + .false. + .false. + .false. + .false. + .false. + [] + [] + .false. + ) + [] + [] + [(= + (Var 4 _lpython_return_variable_compute_x) + (IntegerBinOp + (IntegerBinOp + (IntegerBinOp + (IntegerConstant 2 (Integer 4)) + Mul + (IntegerConstant 3 (Integer 4)) + (Integer 4) + (IntegerConstant 6 (Integer 4)) + ) + Pow + (IntegerConstant 1 (Integer 4)) + (Integer 4) + (IntegerConstant 6 (Integer 4)) + ) + Add + (IntegerConstant 2 (Integer 4)) + (Integer 4) + (IntegerConstant 8 (Integer 4)) + ) + () + ) + (GoTo + 1 + __1 + ) + (BlockCall + 1 + 4 ~empty_block + ) + (= + (Var 4 x) + (Var 4 _lpython_return_variable_compute_x) + () + ) + (Print + () + [(Var 4 x)] + () + () + )] + () + Public + .false. + .false. + () + ) + }) + __main__ + [] + .false. + .false. + ), + main_program: + (Program + (SymbolTable + 6 + { + __main____global_statements: + (ExternalSymbol + 6 + __main____global_statements + 2 __main____global_statements + __main__ + [] + __main____global_statements + Public + ) + }) + main_program + [__main__] + [(SubroutineCall + 6 __main____global_statements + 2 __main____global_statements + [] + () + )] + ) + }) + [] +) +``` ### Ahead-of-Time (AoT) compilation @@ -95,7 +611,7 @@ print(res) ./a.out 0.01s user 0.00s system 89% cpu 0.012 total ``` -You can see that it's very fast. It's still plenty fast with the C backend via the command-line argument --backend=c: +You can see that it's very fast. It's still plenty fast with the C backend via the command-line argument `--backend=c`: ```zsh % time lpython /Users/czgdp1807/lpython_project/debug.py --backend=c @@ -107,13 +623,13 @@ Note that time lpython `/Users/czgdp1807/lpython_project/debug.py --backend=c` i ### Just-In-Time Compilation -Just-in-time compilation in LPython requires only decorating Python function with @lpython. The decorator takes an option for specifying the desired backend, as in, @lpython(backend="c") or @lpython(backend="llvm"). Only C is supported at present; LLVM and others will be added in the near future. The decorator also propagates backend-specific options. 
For example: ```python @lpython(backend="c", - backend_optimization_flags=["-ffast-math", - "-funroll-loops", - "-O1"]) + backend_optimization_flags=["-ffast-math", + "-funroll-loops", + "-O1"]) ``` Note that by default the C backend is used without any optimisation flags. @@ -187,7 +703,7 @@ def get_email(text): lpython@lcompilers.org ``` -Note: The `@pythoncall` and `@lpython` decorators are presently supported with just the `C` backend but eventually will work with the LLVM backend and that's work in progress. +*Note*: The `@pythoncall` and `@lpython` decorators are presently supported only with the `C` backend; support for the LLVM backend is work in progress. ## Benchmarks and Demos @@ -200,7 +716,14 @@ We compare JIT compilation of LPython to Numba on **summation of all the element **System Information** -Softwares - The numba version used is `numba-0.57.1`, LPython commit is `a39430386a0e7ea5de2f569e27229017dff78330` and Python version is `Python 3.10.4`. +| Compiler | Version | +|---|---| +| Numba | 0.57.1 | +| LPython | 0.19.0 | +| Python | 3.10.4 | + +
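Editor's note: to make the decorator usage above concrete, here is a minimal, illustrative sketch of a JIT-decorated function. It is not part of this patch or of the benchmarks below; the backend name and optimization flags are exactly those listed above, while the function body and the `from lpython import i32, lpython` import are assumptions for illustration.

```python
# Assumption: the @lpython decorator and the i32 annotation are importable
# from the lpython module; the post does not show this import explicitly.
from lpython import i32, lpython

# JIT-compile through the C backend with the flags quoted above.
@lpython(backend="c",
         backend_optimization_flags=["-ffast-math", "-funroll-loops", "-O1"])
def triangular_sum(n: i32) -> i32:
    # Sum 0 + 1 + ... + (n - 1), written with the typed locals LPython expects.
    s: i32 = 0
    i: i32
    for i in range(n):
        s += i
    return s

# Called from ordinary CPython code; the decorated body runs as compiled code.
print(triangular_sum(10000))
```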
+ @@ -261,6 +784,10 @@ test() | Numba | 0.20 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | LPython | 0.32 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.60 | +
+ + + | Compiler | Execution Time (s) | System | Relative | |---|---|---|---| | LPython | 0.013 | Apple M1 MBP 2020 | 1.00 | @@ -272,6 +799,9 @@ test() | LPython | 0.048 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | Numba | 0.048 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | +
+ + **Pointwise multiplication of two 1-D arrays** @@ -325,6 +855,10 @@ test() | Numba | 0.21 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | LPython | 0.31 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.48 | +
+ + + | Compiler | Execution Time (s) | System | Relative | |---|---|---|---| | Numba | 0.041 | Apple M1 MBP 2020 | 1.00 | @@ -336,6 +870,9 @@ test() | Numba | 0.21 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | LPython | 0.21 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | +
+ + **Insertion sort on lists** @@ -405,6 +942,10 @@ test() | Numba | 0.35 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | LPython | 0.37 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.06 | +
+ + + | Compiler | Execution Time (s) | System | Relative | |---|---|---|---| | LPython | 0.11 | Apple M1 MBP 2020 | 1.00 | @@ -416,6 +957,9 @@ test() | LPython | 0.10 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | Numba | 0.36 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 3.60 | +
+ + **Quadratic-time implementation of the Dijkstra shortest-path algorithm on a fully connected graph** @@ -538,6 +1082,10 @@ test() | LPython | 1.08 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | Numba | 1.69 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.56 | +
+ + + | Compiler | Execution Time (s) | System | Relative | |---|---|---|---| | LPython | 0.23 | Apple M1 MBP 2020 | 1.00 | @@ -549,6 +1097,9 @@ test() | LPython | 0.87 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 1.00 | | Numba | 1.95 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 2.24 | +
+ + ### Ahead-of-Time (AoT) Compilation @@ -557,7 +1108,17 @@ Next, we see how LPython compares to other AoT compilers and to the standard CPy **System Information** -The Clang++ version used is `14.0.3`, `g++` version is `11.3.0`, LPython commit is `a39430386a0e7ea5de2f569e27229017dff78330` and Python version is `Python 3.10.4`. + +| Compiler | Version | +|---|---| +| clang++ | 14.0.3 | +| g++ | 11.3.0 | +| LPython | 0.19.0 | +| Python | 3.10.4 | + +
+ + **Quadratic-time implementation of the Dijkstra shortest-path algorithm on a fully connected graph** @@ -697,6 +1258,10 @@ int main() { | g++ | 1.358 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 2.21 | | Python | 7.365 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 12.01 | +
+ + + Note the optimization flags furnished to each compiler. | Compiler/Interpreter | Optimization flags used | @@ -706,6 +1271,10 @@ Note the optimization flags furnished to each compiler. | g++ | `-ffast-math -funroll-loops -O3`| | Python | - | +
+ + + **Floyd-Warshall algorithm on array representation of graphs** @@ -806,6 +1375,8 @@ int main() { | LPython | 2.933 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 4.22 | | Python | 440.588 | AMD Ryzen 5 2500U (Ubuntu 22.04) | 633.94 | +
+ @@ -818,6 +1389,9 @@ Note the optimization flags furnished to each compiler. | g++ | `-ffast-math -funroll-loops -O3`| | Python | - | +
+ + ### Interoperability with CPython @@ -951,8 +1525,7 @@ def plot_graph(x, y1, y2, y3): (lp) 23:10:44:~/lpython_project % # Works see the graph below ``` - -![Output graph](https://hackmd.io/_uploads/r1yOs6292.png) +![Output graph](https://lpython.org/blog/images/graph.png) **Visualization using Matplotlib: Mandelbrot Set** @@ -1050,14 +1623,14 @@ $ lpython main.py --backend=c --link-numpy Done. ``` -![mandelbrot-set-gray](https://hackmd.io/_uploads/r1h-wCpch.png) +![mandelbrot-set-gray](https://lpython.org/blog/images/gray.png) -![mandelbrot-set-color](https://hackmd.io/_uploads/ByXPEATqn.png) +![mandelbrot-set-color](https://lpython.org/blog/images/color.png) ## Conclusion -The benchmarks support the claim that LPython is competitive with its competitors in all features it offers. In JIT, the execution times of LPython-compiled functions are at least as short as equivalent Numba functions.The speed of JIT compilation, itself, is slow in some cases because it depends on a C compiler to generate optimal binary code. For algorithms with rich data structures like `dict` (hash maps) and `list`, LPython shows much faster speed than Numba. In AoT compilation for tasks like the Dijkstra algorithm, LPython beats equivalent C++ code very comfortably. For an array-based implementation of the Floyd-Warshall algorithm, LPython generates code almost as fast as doess C++. +The benchmarks support the claim that LPython is competitive with its competitors in all features it offers. In JIT, the execution times of LPython-compiled functions are at least as short as equivalent Numba functions. The speed of JIT compilation, itself, is slow in some cases because it currently depends on a C compiler to generate optimal binary code. For algorithms with rich data structures like `dict` (hash maps) and `list`, LPython shows much faster speed than Numba. In AoT compilation for tasks like the Dijkstra algorithm, LPython beats equivalent C++ code very comfortably. For an array-based implementation of the Floyd-Warshall algorithm, LPython generates code almost as fast as C++ does. -The main takeaway is that LPython/LFortran generate fast code by default. Our benchmarks show that it's straightforward to write high-speed LPython code. We hope to raise expectations that LPython output will be in general at least as fast as the equivalent C++ code. Users love Python because of its many productivity advantages: great tooling, easy syntax, and rich data structures like lists, dicts, sets, and arrays. Because any LPython program is also an ordinary Python program, all the tools -- debuggers and profilers, for instance -- just work. Then, LPython delivers run-time speeds, even with rich data structures at least as short as alternatives in most cases. In the future, LPython will allow user-defined implementations of data structures for those rare cases where the versions shipped with LPython are not good enough. +The main takeaway is that LPython/LFortran generate fast code by default. Our benchmarks show that it's straightforward to write high-speed LPython code. We hope to raise expectations that LPython output will be in general at least as fast as the equivalent C++ code. Users love Python because of its many productivity advantages: great tooling, easy syntax, and rich data structures like lists, dicts, sets, and arrays. Because any LPython program is also an ordinary Python program, all the tools -- debuggers and profilers, for instance -- just work. 
Then, LPython delivers run-time speeds, even with rich data structures at least as short as alternatives in most cases. diff --git a/content/compilers_list.py b/content/compilers_list.py new file mode 100644 index 0000000..124a64c --- /dev/null +++ b/content/compilers_list.py @@ -0,0 +1,58 @@ +import requests + +def get_total_stars(name): + # Caution: This works for only 1 loop iteration, + # after that GitHub doesn't respond for a while + url = f"https://api.github.com/repos/{name}" + response = requests.get(url) + + if response.status_code == 200: + repo_data = response.json() + total_stars = repo_data['stargazers_count'] + return total_stars + else: + print(f"Error {response.status_code}: " + f"Unable to fetch data for repository {name}") + return None + +# Data as on 2023-07-28 +# Recent commits example +# https://github.com/cupy/cupy?from=2022-07-28&to=2023-07-28&type=c + +compilers_list = { + # Name : [Total Contributors, Recent Contributors, Total stars] + "pytorch/pytorch" : [2857, 75, 69253], # 15 < 10 commits + "pyston/pyston" : [1263, 2, 2426], + "google/jax" : [ 523, 60, 24010], # 37 < 10 commits + "cython/cython" : [ 435, 18, 8168], + "numba/numba" : [ 306, 25, 8790], + "cupy/cupy" : [ 286, 15, 7062], + "taichi-dev/taichi" : [ 224, 44, 23503], + "Nuitka/Nuitka" : [ 138, 36, 9385], # Except 1, all others < 10 commits, + # (Most of them (27) are 1 commit) + "serge-sans-paille/pythran" : [ 58, 9, 1912], + "pypy/pypy.org" : [ 36, 5, 21], # Website + "weld-project/weld" : [ 35, 0, 2945], + "lcompilers/lpython" : [ 34, 28, 141], + "IronLanguages/ironpython3" : [ 33, 5, 2179], + "pyccel/pyccel" : [ 32, 17, 279], # 15 < 10 commits + "pyjs/pyjs" : [ 30, 0, 1123], + "google/grumpy" : [ 29, 0, 10580], # Archived on Mar 23, 2023 + "Quansight-Labs/uarray" : [ 22, 1, 98], + "shedskin/shedskin" : [ 20, 7, 701], + "jython/jython" : [ 18, 4, 897], + "seq-lang/seq" : [ 9, 0, 680], # Archived on Dec 8, 2022. + "jakeret/hope" : [ 6, 0, 385], + "fluiddyn/transonic" : [ 3, 1, 105], +} + +# To update GitHub stars +# Caution: `get_total_stars`` works only once, +# after that GitHub doesn't respond for a while. +# Error: API rate limit exceeded for "ip_address" +# Solution: change your IP (network) +# for i in compilers_list: +# compilers_list[i][2] = get_total_stars(i) +# print("https://github.com/" + i) + +# pprint(compilers_list) diff --git a/content/index_intro/index.md b/content/index_intro/index.md index 9fe0b2b..a6adaf2 100644 --- a/content/index_intro/index.md +++ b/content/index_intro/index.md @@ -3,10 +3,14 @@ headless: true date: 2023-07-28 --- -LPython is a Python compiler that aims to provide optimized machine code by compiling type-annotated Python code. It offers several backends, including LLVM, C, C++, and WASM, which allow it to generate code into multiple target languages simultaneously. LPython's main focus is on speed and performance, and it achieves this through various features and optimizations. +LPython aggressively optimizes type-annotated Python code. It has several +backends, including LLVM, C, C++, and WASM. LPython’s primary tenet is speed. -LPython is still in development (alpha stage) and may evolve further to encompass more extensive Python code and additional optimizations. -[LPython: Making Python faster with LLVM](/blog/2023/05/lpython-making-python-faster-with-llvm/). +LPython is in alpha stage (meaning users enthusiastically participate in bug +reporting and fixing). 
LPython will compile more of Python in the future, and +accumulate more optimizations, experimental and production-ready. LPython makes +it easy to write new back-ends for custom, exotic, or unusual hardware. +Release blog post: [LPython: Novel, Fast, Retargetable Python Compiler](https://lpython.org/blog/2023/07/lpython-novel-fast-retargetable-python-compiler/). Main repository at GitHub: [https://github.com/lcompilers/lpython](https://github.com/lcompilers/lpython) @@ -14,5 +18,4 @@ Main repository at GitHub: Try LPython in your browser using WebAssembly: https://dev.lpython.org/ -Twitter: [@lfortranorg](https://twitter.com/lfortranorg) Any questions? Ask us on Zulip [![project chat](https://img.shields.io/badge/zulip-join_chat-brightgreen.svg)](https://lfortran.zulipchat.com/#narrow/stream/311866-LPython). diff --git a/layouts/download/downloadlayout.html b/layouts/download/downloadlayout.html index 978f2ff..3bf7088 100644 --- a/layouts/download/downloadlayout.html +++ b/layouts/download/downloadlayout.html @@ -12,14 +12,14 @@

Binaries

in the Documentation. -{{ $tarballs := getJSON "https://raw.githubusercontent.com/lfortran/tarballs/master/docs/data.json" }} +{{ $lpython_releases := getJSON "https://api.github.com/repos/lcompilers/lpython/releases" }}

Releases

Latest release:
    - {{ range first 1 (sort $tarballs.release ".created" "desc") }} + {{ range first 1 ($lpython_releases) }}
  • - lfortran-{{ .version }}.tar.gz - ({{ dateFormat "Jan 2, 2006" .created }}) + lpython-{{ .tag_name }} + ({{ dateFormat "Jan 2, 2006" .published_at }})
  • {{ end }}
@@ -30,24 +30,14 @@

Releases

-

Development Version

- Latest development version in master: -
    - {{ range first 1 (sort $tarballs.dev ".created" "desc") }} -
  • - lfortran-{{ .version }}.tar.gz - ({{ dateFormat "Jan 2, 2006" .created }}) -
  • - {{ end }} -
{{ .Content }}
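Editor's note on the download-page change above: the template now reads the GitHub releases API instead of the tarballs JSON, takes the first entry, and renders its `tag_name` and `published_at` fields. The snippet below is an illustrative sketch (not part of the patch) for inspecting that JSON locally, in the same `requests` style as the `content/compilers_list.py` script added earlier; note that the unauthenticated API is rate-limited, as that script's comments warn.

```python
import requests

# Same endpoint the Hugo template now reads via getJSON.
url = "https://api.github.com/repos/lcompilers/lpython/releases"
response = requests.get(url)

if response.status_code == 200:
    releases = response.json()
    if releases:
        latest = releases[0]  # the template takes `first 1` of this list
        # tag_name and published_at are the two fields the template renders.
        print(latest["tag_name"], latest["published_at"])
else:
    print(f"Error {response.status_code}: unable to fetch releases")
```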