227 Commits
1.3.2 ... 1.4.1

Author SHA1 Message Date
Oscar Krause
699dbf6fac Merge branch '1-parsing-issue-in-mal-formatted-mac_address_list' into 'main'
Resolve "Parsing issue in mal formatted "mac_address_list""

Closes #1

See merge request oscar.krause/fastapi-dls!40
2024-11-21 09:18:22 +01:00
Oscar Krause
317699ff58 code styling 2024-11-21 08:51:39 +01:00
Oscar Krause
55446f7d9c fixes 2024-11-21 08:51:39 +01:00
Oscar Krause
88c78efcd9 fixes 2024-11-21 08:51:39 +01:00
Oscar Krause
fb3ac4291f code styling 2024-11-21 08:51:39 +01:00
Oscar Krause
15f14cac11 implemented "SUPPORT_MALFORMED_JSON" variable 2024-11-21 08:51:39 +01:00
Oscar Krause
018d7c34fc fixes 2024-11-21 08:51:39 +01:00
Oscar Krause
1aee423120 fixes 2024-11-21 08:51:39 +01:00
Oscar Krause
a6b2f2a942 fixed json payload 2024-11-21 08:51:39 +01:00
Oscar Krause
e33024db86 fixed variable names
ref. oscar.krause/fastapi-dls#1
2024-11-21 08:51:39 +01:00
Oscar Krause
4ad15f0849 fix malformed json on auth
ref. oscar.krause/fastapi-dls#1
2024-11-21 08:51:39 +01:00
Oscar Krause
7bad0359af updated ci pipeline to match currently supported (non-EOL) systems 2024-11-21 08:44:14 +01:00
Oscar Krause
59a7c9f15a Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!38
2024-11-13 16:11:40 +01:00
Oscar Krause
bc6d692f0a added "delete_expired" method for leases 2024-11-13 15:03:37 +01:00
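The `delete_expired` method itself is not shown in this view; as a rough sketch of what such a cleanup helper could do (the model stub, column names and session handling below are illustrative assumptions, not taken from the repository):

```python
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, String, delete
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Lease(Base):
    # reduced stand-in for the real ORM model; only the columns needed for this sketch
    __tablename__ = 'lease'
    lease_ref = Column(String(36), primary_key=True)
    lease_expires = Column(DateTime(), nullable=False)


def delete_expired(session: Session) -> int:
    # remove every lease whose expiry lies in the past and report how many rows were deleted
    result = session.execute(delete(Lease).where(Lease.lease_expires <= datetime.now(timezone.utc)))
    session.commit()
    return result.rowcount
```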
Oscar Krause
63c37c6334 fixed timezone in json response 2024-11-13 15:03:12 +01:00
Oscar Krause
fa2c06972e sql query improvements 2024-11-13 15:01:33 +01:00
Oscar Krause
e4e6387b2a ci improvements 2024-11-13 14:58:55 +01:00
Oscar Krause
f2be9dca8d Merge branch 'dev' into 'main'
requirements.txt updated

See merge request oscar.krause/fastapi-dls!36
2024-11-13 14:09:54 +01:00
Oscar Krause
52dd425583 fixes 2024-11-13 13:41:07 +01:00
Oscar Krause
286399d79a fixed test matrix 2024-11-13 10:48:11 +01:00
Oscar Krause
4ab1a2ed22 added requirements for ubuntu 24.10 2024-11-13 10:28:08 +01:00
Oscar Krause
459c0e21af debugging 2024-11-13 10:27:52 +01:00
Oscar Krause
98ef64211b typings 2024-11-13 09:09:00 +01:00
Oscar Krause
0b4bb65546 added python3-pip to test 2024-11-13 08:55:00 +01:00
Oscar Krause
47624f5019 Dockerfile - updated db dependencies 2024-11-13 08:37:07 +01:00
Oscar Krause
2b9d7821c0 improved gitlab test matrix 2024-11-13 08:33:28 +01:00
Oscar Krause
45f5108717 requirements.txt updated 2024-11-13 08:25:40 +01:00
Oscar Krause
a7fe8b867e Merge branch 'dev' into 'main'
added way to include driver version in api

See merge request oscar.krause/fastapi-dls!35
2024-10-24 13:28:08 +02:00
Oscar Krause
78214df9cc updated to python 3.12 2024-10-24 10:44:31 +02:00
Oscar Krause
4245d5a582 requirements.txt updated 2024-10-24 08:09:30 +02:00
Oscar Krause
9b5a387169 updated support matrix 2024-10-24 08:09:24 +02:00
Oscar Krause
9377d5ce28 requirements.txt updated 2024-10-08 14:33:44 +02:00
Oscar Krause
7489307db8 README.md updated 2024-08-09 13:15:16 +02:00
Oscar Krause
d41314e81d requirements.txt updated 2024-08-09 13:14:53 +02:00
Oscar Krause
a1123d5451 updated support matrix (removed EOL) 2024-07-24 05:35:22 +02:00
Oscar Krause
93cf719454 updated support matrix 2024-07-24 05:35:09 +02:00
Oscar Krause
0dc8f6c582 refactorings 2024-07-11 05:49:13 +02:00
Oscar Krause
4b0219b85a updated to new vgpu page
ref. https://docs.nvidia.com/vgpu/index.html
2024-07-11 05:49:00 +02:00
Oscar Krause
8edbb25c16 README updated 2024-06-27 08:49:31 +02:00
Oscar Krause
49a24f0b68 README updated 2024-06-27 08:47:35 +02:00
Oscar Krause
8af3c8e2b3 README updated 2024-06-27 08:47:04 +02:00
Oscar Krause
3c321a202c README updated 2024-06-27 08:45:25 +02:00
Oscar Krause
1b7d8bc0dc README reorganized 2024-06-27 08:37:21 +02:00
Oscar Krause
23ccea538f README reorganized 2024-06-27 08:32:15 +02:00
Oscar Krause
c79455b84d added way to include driver version in api
use `create_driver_matrix_json.py` to generate the file `static/driver_matrix.json`. Logging is currently disabled so users are not confused when the file is missing. This is optional!
2024-06-21 19:01:33 +02:00
Oscar Krause
35fc5ea6b0 set log format 2024-06-21 18:59:23 +02:00
Oscar Krause
6a54c05fbb Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!34
2024-06-18 13:56:30 +02:00
Oscar Krause
c9ac915055 code styling & comments 2024-06-13 20:34:45 +02:00
Oscar Krause
8c5850beda migrated from deprecated "startup" to "lifespan" hook (fastapi) 2024-06-13 20:34:27 +02:00
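For context, this replaces FastAPI's deprecated `@app.on_event("startup")` / `@app.on_event("shutdown")` hooks with the `lifespan` context manager; a minimal sketch of the pattern (the actual startup work done by fastapi-dls is not shown in this view):

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI


@asynccontextmanager
async def lifespan(app: FastAPI):
    # everything before "yield" runs once at startup (old "startup" hook)
    print('starting up ...')
    yield
    # everything after "yield" runs once at shutdown (old "shutdown" hook)
    print('shutting down ...')


app = FastAPI(lifespan=lifespan)
```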
Oscar Krause
0d9e814d0d show if debug is enabled on app startup 2024-06-13 20:18:33 +02:00
Oscar Krause
5438317fb7 rearranged imports 2024-06-13 20:18:18 +02:00
Oscar Krause
21f19be8ab code styling & improved logging 2024-06-13 20:16:51 +02:00
Oscar Krause
eff6aae25d code styling 2024-06-13 19:53:57 +02:00
Oscar Krause
6473655e57 import fixes 2024-06-13 19:24:46 +02:00
Oscar Krause
c45aa1a2a8 fixed [[__TOC__]] to [TOC] to support "markdown" package 2024-06-12 21:27:56 +02:00
Oscar Krause
1d0631417d added link to gpu support matrix 2024-06-12 19:44:37 +02:00
Oscar Krause
847d3589c5 typos 2024-06-07 08:42:41 +02:00
Oscar Krause
ca53a4e084 updated supported vGPU releases 2024-06-07 08:42:37 +02:00
Oscar Krause
006d3a1833 Merge branch 'dev' into 'main'
Added Ubuntu 24.04 support & updated requirements

See merge request oscar.krause/fastapi-dls!33
2024-05-10 10:52:00 +02:00
Oscar Krause
ad3b622c23 requirements.txt updated 2024-05-10 10:09:28 +02:00
Oscar Krause
e51d6bd391 added Ubuntu 24.04 as supported 2024-05-10 09:16:20 +02:00
Oscar Krause
78c1978dd5 added test matrix for python3.12 2024-05-10 09:15:46 +02:00
Oscar Krause
4ebb4d790e added ubuntu-24.04 "requirements-ubuntu-24.04.txt" 2024-05-10 08:22:47 +02:00
Oscar Krause
11f1456538 test image matrix 2024-05-10 07:56:13 +02:00
Oscar Krause
be6797efc7 requirements.txt updated 2024-04-23 07:43:39 +02:00
Oscar Krause
42fe066e1a Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!32
2024-04-18 07:38:31 +02:00
Oscar Krause
9eb91cbe1a link to proxmox-installer.sh and credits 2024-04-18 07:07:57 +02:00
Oscar Krause
395884f643 credits & further reading 2024-04-10 07:09:01 +02:00
Oscar Krause
ef542ec821 Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!31
2024-04-09 10:28:57 +02:00
Oscar Krause
254e4ee08c requirements.txt updated 2024-04-09 08:13:33 +02:00
Oscar Krause
07273c3ebd update vGPU Version Matrix 2024-04-09 08:01:20 +02:00
Oscar Krause
e04723d128 removed biggerthanshit link 2024-04-09 08:01:01 +02:00
Oscar Krause
8f498f4960 added 17.1 as supported 2024-04-08 15:10:01 +02:00
Oscar Krause
dd69f60fd0 added link to releases & release notes 2024-03-06 20:58:51 +01:00
Oscar Krause
a5d599a52c typos 2024-03-04 21:31:56 +01:00
Oscar Krause
66d203e72a requirements.txt updated 2024-03-04 21:13:12 +01:00
Oscar Krause
7800bf73a8 added "16.4" and "17.0" as supported 2024-02-27 15:44:34 +01:00
Oscar Krause
5b39598487 Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!30
2024-02-27 08:20:43 +01:00
Oscar Krause
ed59260a10 added "16.3" support 2024-02-26 20:53:47 +01:00
Oscar Krause
7c70d121be requirements.txt updated 2024-02-26 20:53:33 +01:00
Oscar Krause
213e768708 removed todo 2024-01-18 17:02:09 +01:00
Oscar Krause
0696900d67 fixes 2024-01-18 16:58:33 +01:00
Oscar Krause
4fb90a22e3 make tests interruptible 2024-01-18 13:10:12 +01:00
Oscar Krause
6aa197dcae only run test matrix when "app" or "test" changes 2024-01-18 13:09:30 +01:00
Oscar Krause
46f6c9fe99 fixed CI/CD path from "/builds" to "/tmp/builds" 2024-01-18 13:06:45 +01:00
Oscar Krause
2baaeb561b run different jobs on "$CI_DEFAULT_BRANCH" 2024-01-18 12:59:06 +01:00
Oscar Krause
867cd7018a removed pylint 2024-01-18 12:58:43 +01:00
Oscar Krause
9c686913dd disabled pylint 2024-01-18 12:46:51 +01:00
Oscar Krause
d3c4dc3fb7 disabled code_quality debug 2024-01-18 08:34:43 +01:00
Oscar Krause
af8b1c2387 Update .codeclimate.yml 2024-01-17 23:13:20 +01:00
Oscar Krause
d37d96dc34 fixed test_coverage (fail on matrix) 2024-01-17 23:05:57 +01:00
Oscar Krause
21d052523f added code_quality debug 2024-01-17 22:43:47 +01:00
Oscar Krause
22110df791 added code_quality “SOURCE_CODE” variable 2024-01-17 22:37:33 +01:00
Oscar Krause
c7f354d50c removed "cython" from "test" 2024-01-17 11:33:22 +01:00
Oscar Krause
3bdfc94527 removed tests for "23.04"
> gcc -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/tmp/pip-install-sazb8fvo/httptools_694f06fa2e354ed9ba9f5c167df7fce4/vendor/llhttp/include -I/tmp/pip-install-sazb8fvo/httptools_694f06fa2e354ed9ba9f5c167df7fce4/vendor/llhttp/src -I/usr/local/include/python3.11 -c httptools/parser/parser.c -o build/temp.linux-x86_64-cpython-311/httptools/parser/parser.o -O2
      httptools/parser/parser.c:212:12: fatal error: longintrepr.h: No such file or directory
2024-01-17 09:33:58 +01:00
Oscar Krause
9473f10653 added tests for Ubuntu "Mantic Minotaur" 2024-01-17 08:08:37 +01:00
Oscar Krause
e9ad1d7791 requirements.txt updated 2024-01-12 14:53:17 +01:00
Oscar Krause
f97ee9c8fc updated debian bookworm 12 dependencies 2024-01-12 14:25:03 +01:00
Oscar Krause
236948e483 updated test to debian bookworm 2023-11-03 14:03:48 +01:00
Oscar Krause
948934ad0e fixed testing dependency 2023-11-03 12:53:50 +01:00
Oscar Krause
3ef14e5522 added gcc as dependency 2023-11-03 11:41:44 +01:00
Oscar Krause
ee50ede2ea fixes 2023-11-03 10:49:06 +01:00
Oscar Krause
b11579de98 fixed debian package versions 2023-11-03 09:28:23 +01:00
Oscar Krause
dc33c29158 fixed versions & added 16.2 as supported 2023-11-03 08:23:07 +01:00
Oscar Krause
6f9107087b added os specific requirements.txt 2023-10-25 07:36:17 +02:00
Oscar Krause
01fd954252 implemented python test matrix for different python dependencies on different os releases 2023-10-25 07:31:29 +02:00
Oscar Krause
995dbdac80 README.md updated 2023-10-25 07:30:57 +02:00
Oscar Krause
65de4d0534 Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!29
2023-10-16 10:27:49 +02:00
Oscar Krause
51b28dcdc3 updated ubuntu from 22.10 (EOL) to 23.04 2023-10-16 09:50:24 +02:00
Oscar Krause
9512e29ed9 requirements.txt updated 2023-09-26 07:09:06 +02:00
Oscar Krause
713e33eed1 added 16.1 as supported nvidia driver release 2023-09-26 07:08:58 +02:00
Oscar Krause
4b16b02a7d added macOS as supported host (using python-venv) 2023-09-26 07:08:41 +02:00
Oscar Krause
3e9d7c0061 added Docker supported system architectures 2023-09-26 07:08:12 +02:00
Oscar Krause
7480cb4cf7 added link to driver compatibility section 2023-07-13 06:46:27 +02:00
Oscar Krause
58ffa752f3 Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!28
2023-07-10 19:11:28 +02:00
Oscar Krause
2d7909546d requirements.txt updated 2023-07-10 18:47:46 +02:00
Oscar Krause
fec099ae81 added support for 16.0 drivers to readme 2023-07-10 13:32:32 +02:00
Oscar Krause
fd4fa84dc5 fixed docker image name (gitlab registry) 2023-07-04 19:39:06 +02:00
Oscar Krause
5ff3295658 fixed deploy docker 2023-07-04 18:58:13 +02:00
Oscar Krause
ca38ebe3fd Merge branch 'dev' into 'main'
Multiarch to DockerHub

See merge request oscar.krause/fastapi-dls!27
2023-07-04 18:47:45 +02:00
Oscar Krause
df5cb3c9c3 Merge branch 'main' into 'dev'
# Conflicts:
#   .gitlab-ci.yml
2023-07-04 16:19:49 +00:00
Oscar Krause
eca64fb1d5 push multiarch image to docker-hub 2023-07-04 17:47:10 +02:00
Oscar Krause
7ae1201c8f fixed new docker registry image path 2023-07-04 13:43:15 +02:00
Oscar Krause
a4e98dae46 fixed docker image path 2023-07-04 13:42:21 +02:00
Oscar Krause
d4267f3ee6 toggle api endpoints 2023-07-04 12:42:31 +02:00
Oscar Krause
c02ca762ea typos 2023-07-04 12:42:19 +02:00
Oscar Krause
10caf2310c added information that IPv6 may need to be disabled 2023-07-04 12:39:13 +02:00
Oscar Krause
7380e4328e removed mysql from included docker drivers 2023-07-04 12:38:54 +02:00
Oscar Krause
c1eaa33d9e added docker command to logging section
thanks to @libreshare (https://gitea.publichub.eu/oscar.krause/fastapi-dls/issues/2)
2023-07-04 12:22:22 +02:00
Oscar Krause
45545953ed improvements
thanks to @AbsolutelyFree (https://gitea.publichub.eu/oscar.krause/fastapi-dls/issues/1)
2023-07-04 12:19:07 +02:00
Oscar Krause
4c8c2ed3d6 fixed "deploy:pacman" 2023-07-04 11:55:26 +02:00
Oscar Krause
6483af4ba9 Merge branch 'dev' into 'main'
Dev

See merge request oscar.krause/fastapi-dls!26
2023-07-04 11:39:37 +02:00
Oscar Krause
e6595c05d5 fixed mariadb-client installation
ref. https://github.com/PyMySQL/mysqlclient/discussions/624
2023-07-04 11:06:00 +02:00
Oscar Krause
fb1dbea1ee added missing "pkg-config" for "mysqlclient==2.2.0"
ref. https://stackoverflow.com/questions/76533384/docker-alpine-build-fails-on-mysqlclient-installation-with-error-exception-can
2023-07-04 10:50:35 +02:00
Oscar Krause
f576ded038 fixed versions 2023-07-04 10:33:30 +02:00
Oscar Krause
54eaf55ee8 refactored docker-compose.yml into a very simple example and moved the proxy setup to the "examples" directory 2023-07-04 10:24:11 +02:00
Oscar Krause
3119d2c7ea added 15.3 to supported drivers list 2023-07-04 10:18:07 +02:00
Oscar Krause
e40f4ce41f updated compatibility list 2023-07-04 10:17:45 +02:00
Oscar Krause
576f22333e docker-compose.yml - added note for TZ 2023-07-04 10:17:34 +02:00
Oscar Krause
0f53436700 requirements.txt updated 2023-07-04 10:17:12 +02:00
Oscar Krause
c79636b1c2 fixed deprecated build-ref variables in .gitlab-ci.yml
ref. https://gitlab.com/gitlab-org/gitlab/-/issues/352957
2023-06-12 11:03:03 +02:00
Oscar Krause
8de9a89e56 Merge branch 'main' into 'dev'
# Conflicts:
#   requirements.txt
2023-06-12 08:56:01 +00:00
Oscar Krause
801d1786ef requirements.txt updated 2023-06-12 10:52:13 +02:00
Oscar Krause
7e5f8b6c8a implemented endpoint to remove expired leases 2023-06-12 10:48:00 +02:00
Oscar Krause
98da86fc2e removed debian bookworm testing notes 2023-06-12 09:31:43 +02:00
Oscar Krause
14cf6a953f typos 2023-05-09 06:58:51 +02:00
Oscar Krause
6a5d3cb2f7 requirements.txt updated 2023-05-09 06:57:17 +02:00
Oscar Krause
774a1c21a1 improved docker build by using "ARG" instead of "version.env", which is not present on local builds (because it's created by the ci-pipeline) 2023-05-09 06:57:03 +02:00
Oscar Krause
d1a77df0e1 updated sudo / su commands (list sudo first instead of su) 2023-04-17 10:58:25 +02:00
Oscar Krause
c9c73f6cf2 updated docker image requirements.txt 2023-04-17 10:35:23 +02:00
Oscar Krause
b216dcb3dd fixed nvidia-smi path on windows 2023-04-17 10:35:08 +02:00
Oscar Krause
d2e4042932 added 15.2 to supported versions 2023-04-01 23:20:06 +02:00
Oscar Krause
04a1ee0948 added Roadmap 2023-03-24 14:28:44 +01:00
Oscar Krause
c1b5f83f44 Merge branch 'multiarch' into 'main'
multiarch support

See merge request oscar.krause/fastapi-dls!25
2023-03-24 10:29:23 +01:00
Oscar Krause
9d1422cbdf secret detection 2023-03-24 10:00:25 +01:00
Oscar Krause
7b7f14bd82 Update .gitlab-ci.yml 2023-03-24 09:31:27 +01:00
Oscar Krause
f72c0f7db3 Update .gitlab-ci.yml 2023-03-24 09:11:26 +01:00
Oscar Krause
76d8753f28 Update .gitlab-ci.yml 2023-03-24 09:07:49 +01:00
Oscar Krause
593db0e789 Update .gitlab-ci.yml 2023-03-24 08:43:29 +01:00
Oscar Krause
3d9e3cb88f set specific arm64 version to v8 2023-03-24 07:48:35 +01:00
Oscar Krause
995b944135 removed "linux/arm/v7" 2023-03-23 11:46:17 +01:00
Oscar Krause
e200c84345 improvements 2023-03-23 11:10:53 +01:00
Oscar Krause
04ff36c94d Update .gitlab-ci.yml 2023-03-23 08:36:34 +01:00
Oscar Krause
89704bc2a1 Update .gitlab-ci.yml 2023-03-23 08:35:27 +01:00
Oscar Krause
6395214fa0 Update .gitlab-ci.yml 2023-03-23 08:24:40 +01:00
Oscar Krause
c8e000eb3e Update .gitlab-ci.yml 2023-03-23 08:22:51 +01:00
Oscar Krause
c8e5676c01 Update .gitlab-ci.yml 2023-03-23 08:17:31 +01:00
Oscar Krause
6f11bc414c Update .gitlab-ci.yml 2023-03-23 08:11:57 +01:00
Oscar Krause
1fc5ac8378 added setup_vgpu_license.sh script 2023-03-22 07:10:18 +01:00
Oscar Krause
87334fbfad added unraid section 2023-03-20 20:21:17 +01:00
Oscar Krause
0fac033657 Merge branch 'dev' into 'main'
dev

See merge request oscar.krause/fastapi-dls!24
2023-03-20 15:01:22 +01:00
Oscar Krause
7cd4e6fde0 fixes 2023-03-20 14:51:54 +01:00
Oscar Krause
a22b56edbe fixes 2023-03-20 14:33:50 +01:00
Oscar Krause
e42dc6aa86 code styling 2023-03-20 10:06:21 +01:00
Oscar Krause
86f703a36c ci improvements 2023-03-20 08:35:06 +01:00
Oscar Krause
71795cc7a2 ci improvements 2023-03-20 08:07:24 +01:00
Oscar Krause
4ef041bb54 styling 2023-02-28 13:08:34 +01:00
Oscar Krause
88c8fb98da added some notes about included database drivers in docker image 2023-02-28 07:52:14 +01:00
Oscar Krause
a7b4a4b631 requirements.txt updated 2023-02-14 16:00:04 +01:00
Oscar Krause
7ccb254cbf dependency scanning 2023-02-14 15:48:49 +01:00
Oscar Krause
1d5d3b31fb dependency scanning 2023-02-14 15:32:32 +01:00
Oscar Krause
7af2e02627 improvements 2023-02-14 15:14:19 +01:00
Oscar Krause
938fc6bd60 added SAST 2023-02-14 14:50:21 +01:00
Oscar Krause
1b9ebb48b1 added secret detection 2023-02-14 13:55:49 +01:00
Oscar Krause
4972f00822 fixes 2023-02-14 13:37:25 +01:00
Oscar Krause
210a36c07f added code-quality and test-coverage 2023-02-14 12:59:31 +01:00
Oscar Krause
e1bbd42b50 Merge branch 'dev' into 'main'
1.3.5

See merge request oscar.krause/fastapi-dls!23
2023-02-13 17:30:11 +01:00
Oscar Krause
c1d541f7c6 bump version to 1.3.5 2023-02-13 08:09:37 +01:00
Oscar Krause
4b58fe6e20 added openSUSE Leap 15.4 support 2023-02-13 08:09:21 +01:00
Oscar Krause
b36b49df11 fixed missing mkdir for config file on manual installation method 2023-02-01 07:55:12 +01:00
Oscar Krause
a42b1c8cfb added note to be logged in as root when using the manual install method (git) 2023-01-30 12:34:46 +01:00
Oscar Krause
59152f95e6 fixed - The `declarative_base()` function is now available as sqlalchemy.orm.declarative_base() 2023-01-30 10:23:09 +01:00
Oscar Krause
62d347510d fixed - sqlalchemy.exc.ArgumentError: Textual SQL expression '\nCREATE TABLE origin (\n\to...' should be explicitly declared as text('\nCREATE TABLE origin (\n\to...') 2023-01-30 10:22:18 +01:00
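These two fixes follow the documented SQLAlchemy 1.4+ migration path; a small sketch of both changes (the table statement is a placeholder, not the project's actual schema):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import declarative_base  # now imported from sqlalchemy.orm instead of sqlalchemy.ext.declarative

Base = declarative_base()
engine = create_engine('sqlite:///:memory:')

# textual SQL must now be wrapped in text() before it is executed
with engine.begin() as connection:
    connection.execute(text('CREATE TABLE origin (origin_ref VARCHAR(36) PRIMARY KEY)'))
```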
Oscar Krause
f540c4b25b requirements.txt updated 2023-01-30 09:19:03 +01:00
Oscar Krause
70212e0edd improved docker-compose examples 2023-01-30 09:18:57 +01:00
Oscar Krause
616e8fba5e README - improvements 2023-01-30 08:37:34 +01:00
Oscar Krause
b905ab9dd9 Merge branch 'dev' into 'main'
1.3.4

See merge request oscar.krause/fastapi-dls!22
2023-01-26 07:56:34 +01:00
Oscar Krause
9edc93653e bump version to 1.3.4 2023-01-26 07:38:48 +01:00
Oscar Krause
f30e9237a5 requirements.txt updated 2023-01-26 07:18:01 +01:00
Oscar Krause
f12dc28c42 fixed overriding config file on update / reinstall 2023-01-26 07:17:16 +01:00
Oscar Krause
02276d5440 disabled openapi endpoints 2023-01-23 07:33:54 +01:00
Oscar Krause
9ebff8d6ca typos 2023-01-23 07:29:13 +01:00
Oscar Krause
48eb6d6c64 typos 2023-01-23 07:23:42 +01:00
Oscar Krause
f7ef8d76b6 fixed Origin.delete() 2023-01-23 07:12:02 +01:00
Oscar Krause
bed24b56ce styling 2023-01-19 08:26:35 +01:00
Oscar Krause
95427d430e added startup script 2023-01-19 07:26:22 +01:00
Oscar Krause
c3ea0aa48c added variable for client-token-expire-delta 2023-01-19 07:26:07 +01:00
Oscar Krause
91be7b226c added some comments for default values 2023-01-19 07:25:44 +01:00
Oscar Krause
7045692958 added official links 2023-01-19 07:25:24 +01:00
Oscar Krause
38177fa259 styling 2023-01-18 14:29:48 +01:00
Oscar Krause
9411759f6d added system requirements and preparations 2023-01-18 14:23:34 +01:00
Oscar Krause
48c37987b2 fixed logging and added current timezone info 2023-01-18 14:23:25 +01:00
Oscar Krause
e3745d7fa8 Merge branch 'dev' into 'main'
1.3.3

See merge request oscar.krause/fastapi-dls!21
2023-01-18 08:13:42 +01:00
Oscar Krause
5bb8f17679 improvements 2023-01-18 08:07:55 +01:00
Oscar Krause
de17b0f1b5 fixes 2023-01-18 08:03:02 +01:00
Oscar Krause
0ab5969d3a fixes 2023-01-18 06:56:16 +01:00
Oscar Krause
059a51fe74 refactored commands 2023-01-17 17:25:48 +01:00
Oscar Krause
bf858b38f4 fixes 2023-01-17 17:09:13 +01:00
Oscar Krause
f60f08d543 run powershell as administrator 2023-01-17 16:57:15 +01:00
Oscar Krause
b2e6fab294 fixes 2023-01-17 16:49:15 +01:00
Oscar Krause
b09bb091a5 bump version to 1.3.3 2023-01-17 16:29:32 +01:00
Oscar Krause
651af4cc82 fixed client-token url and added wget as an alternative to curl 2023-01-17 16:29:21 +01:00
Oscar Krause
70f7d3f483 mark Let's Encrypt section as optional 2023-01-17 15:36:38 +01:00
Oscar Krause
1e4070a1ba added removal of "/usr/share/fastapi-dls" to "postrm" 2023-01-17 14:57:54 +01:00
Oscar Krause
d69d833923 migrated "[[ ]]" if statements to "[ ]" 2023-01-17 14:57:39 +01:00
Oscar Krause
7ef071f92b removed fastapi-dls.service from conffiles 2023-01-17 14:57:09 +01:00
Oscar Krause
3c19fc9d5b implemented "lease_renewal" attribute as a calculated value indicating within what period of time the license must be renewed 2023-01-17 11:49:56 +01:00
25 changed files with 1357 additions and 290 deletions


@@ -1,2 +1 @@
 /etc/fastapi-dls/env
-/etc/systemd/system/fastapi-dls.service


@@ -3,7 +3,7 @@
 WORKING_DIR=/usr/share/fastapi-dls
 CONFIG_DIR=/etc/fastapi-dls
-if [[ ! -f $CONFIG_DIR/instance.private.pem ]]; then
+if [ ! -f $CONFIG_DIR/instance.private.pem ]; then
 echo "> Create dls-instance keypair ..."
 openssl genrsa -out $CONFIG_DIR/instance.private.pem 2048
 openssl rsa -in $CONFIG_DIR/instance.private.pem -outform PEM -pubout -out $CONFIG_DIR/instance.public.pem
@@ -12,8 +12,8 @@ else
 fi
 while true; do
-[[ -f $CONFIG_DIR/webserver.key ]] && default_answer="N" || default_answer="Y"
-[[ $default_answer == "Y" ]] && V="Y/n" || V="y/N"
+[ -f $CONFIG_DIR/webserver.key ] && default_answer="N" || default_answer="Y"
+[ $default_answer == "Y" ] && V="Y/n" || V="y/N"
 read -p "> Do you wish to create self-signed webserver certificate? [${V}]" yn
 yn=${yn:-$default_answer} # ${parameter:-word} If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
 case $yn in
@@ -27,7 +27,7 @@ while true; do
 esac
 done
-if [[ -f $CONFIG_DIR/webserver.key ]]; then
+if [ -f $CONFIG_DIR/webserver.key ]; then
 echo "> Starting service ..."
 systemctl start fastapi-dls.service


@@ -1,8 +1,9 @@
 #!/bin/bash
-if [[ -f /etc/systemd/system/fastapi-dls.service ]]; then
-echo "> Removing service file."
-rm /etc/systemd/system/fastapi-dls.service
-fi
-# todo
+# is removed automatically
+#if [ "$1" = purge ] && [ -d /usr/share/fastapi-dls ]; then
+# echo "> Removing app."
+# rm -r /usr/share/fastapi-dls
+#fi
+echo -e "> Done."


@@ -1,5 +1,3 @@
 #!/bin/bash
 echo -e "> Starting uninstallation of 'fastapi-dls'!"
-# todo


@@ -0,0 +1,11 @@
# https://packages.debian.org/hu/
fastapi==0.92.0
uvicorn[standard]==0.17.6
python-jose[pycryptodome]==3.3.0
pycryptodome==3.11.0
python-dateutil==2.8.2
sqlalchemy==1.4.46
markdown==3.4.1
python-dotenv==0.21.0
jinja2==3.1.2
httpx==0.23.3


@@ -0,0 +1,10 @@
# https://packages.ubuntu.com
fastapi==0.101.0
uvicorn[standard]==0.27.1
python-jose[pycryptodome]==3.3.0
pycryptodome==3.20.0
python-dateutil==2.8.2
sqlalchemy==1.4.50
markdown==3.5.2
python-dotenv==1.0.1
jinja2==3.1.2


@@ -0,0 +1,10 @@
# https://packages.ubuntu.com
fastapi==0.110.3
uvicorn[standard]==0.30.3
python-jose[pycryptodome]==3.3.0
pycryptodome==3.20.0
python-dateutil==2.9.0
sqlalchemy==2.0.32
markdown==3.6
python-dotenv==1.0.1
jinja2==3.1.3


@@ -11,7 +11,8 @@ license=('MIT')
 depends=('python' 'python-jose' 'python-starlette' 'python-httpx' 'python-fastapi' 'python-dotenv' 'python-dateutil' 'python-sqlalchemy' 'python-pycryptodome' 'uvicorn' 'python-markdown' 'openssl')
 provider=("$pkgname")
 install="$pkgname.install"
-source=('git+file:///builds/oscar.krause/fastapi-dls' # https://gitea.publichub.eu/oscar.krause/fastapi-dls.git
+backup=('etc/default/fastapi-dls')
+source=("git+file://${CI_PROJECT_DIR}"
 "$pkgname.default"
 "$pkgname.service"
 "$pkgname.tmpfiles")
@@ -21,8 +22,9 @@ sha256sums=('SKIP'
 '3dc60140c08122a8ec0e7fa7f0937eb8c1288058890ba09478420fc30ce9e30c')
 pkgver() {
+echo -e "VERSION=$VERSION\nCOMMIT=$CI_COMMIT_SHA" > $srcdir/$pkgname/version.env
 source $srcdir/$pkgname/version.env
-echo ${VERSION}
+echo $VERSION
 }
 check() {
@@ -46,6 +48,7 @@ package() {
 install -Dm755 "$srcdir/$pkgname/app/main.py" "$pkgdir/opt/$pkgname/main.py"
 install -Dm755 "$srcdir/$pkgname/app/orm.py" "$pkgdir/opt/$pkgname/orm.py"
 install -Dm755 "$srcdir/$pkgname/app/util.py" "$pkgdir/opt/$pkgname/util.py"
+install -Dm755 "$srcdir/$pkgname/app/middleware.py" "$pkgdir/opt/$pkgname/middleware.py"
 install -Dm644 "$srcdir/$pkgname.default" "$pkgdir/etc/default/$pkgname"
 install -Dm644 "$srcdir/$pkgname.service" "$pkgdir/usr/lib/systemd/system/$pkgname.service"
 install -Dm644 "$srcdir/$pkgname.tmpfiles" "$pkgdir/usr/lib/tmpfiles.d/$pkgname.conf"

.UNRAID/FastAPI-DLS.xml (new file)

@@ -0,0 +1,48 @@
<?xml version="1.0"?>
<Container version="2">
<Name>FastAPI-DLS</Name>
<Repository>collinwebdesigns/fastapi-dls:latest</Repository>
<Registry>https://hub.docker.com/r/collinwebdesigns/fastapi-dls</Registry>
<Network>br0</Network>
<MyIP></MyIP>
<Shell>sh</Shell>
<Privileged>false</Privileged>
<Support/>
<Project/>
<Overview>Source:&#xD;
https://git.collinwebdesigns.de/oscar.krause/fastapi-dls#docker&#xD;
&#xD;
Make sure you create these certificates before starting the container for the first time:&#xD;
```&#xD;
# Check https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/-/tree/main/#docker for more information:&#xD;
WORKING_DIR=/mnt/user/appdata/fastapi-dls/cert&#xD;
mkdir -p $WORKING_DIR&#xD;
cd $WORKING_DIR&#xD;
# create instance private and public key for singing JWT's&#xD;
openssl genrsa -out $WORKING_DIR/instance.private.pem 2048 &#xD;
openssl rsa -in $WORKING_DIR/instance.private.pem -outform PEM -pubout -out $WORKING_DIR/instance.public.pem&#xD;
# create ssl certificate for integrated webserver (uvicorn) - because clients rely on ssl&#xD;
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout $WORKING_DIR/webserver.key -out $WORKING_DIR/webserver.crt&#xD;
```&#xD;
</Overview>
<Category/>
<WebUI>https://[IP]:[PORT:443]</WebUI>
<TemplateURL/>
<Icon>https://git.collinwebdesigns.de/uploads/-/system/project/avatar/106/png-transparent-nvidia-grid-logo-business-nvidia-electronics-text-trademark.png?width=64</Icon>
<ExtraParams>--restart always</ExtraParams>
<PostArgs/>
<CPUset/>
<DateInstalled>1679161568</DateInstalled>
<DonateText/>
<DonateLink/>
<Requires/>
<Config Name="HTTPS Port" Target="" Default="443" Mode="tcp" Description="Same as DLS Port below." Type="Port" Display="always-hide" Required="true" Mask="false">443</Config>
<Config Name="App Cert" Target="/app/cert" Default="/mnt/user/appdata/fastapi-dls/cert" Mode="rw" Description="[REQUIRED] Read the description above to make this folder. &#13;&#10;&#13;&#10;You do not need to change the path." Type="Path" Display="always-hide" Required="true" Mask="false">/mnt/user/appdata/fastapi-dls/cert</Config>
<Config Name="DLS Port" Target="DSL_PORT" Default="443" Mode="" Description="Choose port you want to use. Make sure to change the HTTPS port above to match it." Type="Variable" Display="always-hide" Required="true" Mask="false">443</Config>
<Config Name="App database" Target="/app/database" Default="/mnt/user/appdata/fastapi-dls/data" Mode="rw" Description="[REQUIRED] Read the description above to make this folder. &#13;&#10;&#13;&#10;You do not need to change the path." Type="Path" Display="always-hide" Required="true" Mask="false">/mnt/user/appdata/fastapi-dls/data</Config>
<Config Name="DSL IP" Target="DLS_URL" Default="localhost" Mode="" Description="Put your container's IP (or your host's IP if it's shared)." Type="Variable" Display="always-hide" Required="true" Mask="false"></Config>
<Config Name="Time Zone" Target="TZ" Default="" Mode="" Description="Format example: America/New_York. MUST MATCH YOUR CURRENT TIMEZONE AND THE GUEST VMS TIMEZONE! Otherwise you'll get into issues, read the guide above." Type="Variable" Display="always-hide" Required="true" Mask="false"></Config>
<Config Name="Database" Target="DATABASE" Default="sqlite:////app/database/db.sqlite" Mode="" Description="Set to sqlite:////app/database/db.sqlite" Type="Variable" Display="advanced-hide" Required="true" Mask="false">sqlite:////app/database/db.sqlite</Config>
<Config Name="Debug" Target="DEBUG" Default="true" Mode="" Description="true to enable debugging, false to disable them." Type="Variable" Display="advanced-hide" Required="false" Mask="false">true</Config>
<Config Name="Lease" Target="LEASE_EXPIRE_DAYS" Default="90" Mode="" Description="90 days is the maximum value." Type="Variable" Display="advanced" Required="false" Mask="false">90</Config>
</Container>


@@ -0,0 +1,197 @@
#!/bin/bash
# This script automates the licensing of the vGPU guest driver
# on Unraid boot. Set the Schedule to: "At Startup of Array".
#
# Relies on FastAPI-DLS for the licensing.
# It assumes FeatureType=1 (vGPU), change it as you see fit in line <114>
#
# Requires `eflutils` to be installed in the system for `nvidia-gridd` to run
# To Install it:
# 1) You might find it here: https://packages.slackware.com/ (choose the 64bit version of Slackware)
# 2) Download the package and put it in /boot/extra to be installed on boot
# 3) a. Reboot to install it, OR
# b. Run `upgradepkg --install-new /boot/extra/elfutils*`
# [i]: Make sure to have only one version of elfutils, otherwise you might run into issues
# Sources and docs:
# https://docs.nvidia.com/grid/15.0/grid-vgpu-user-guide/index.html#configuring-nls-licensed-client-on-linux
#
################################################
# MAKE SURE YOU CHANGE THESE VARIABLES #
################################################
###### CHANGE ME!
# IP and PORT of FastAPI-DLS
DLS_IP=192.168.0.123
DLS_PORT=443
# Token folder, must be on a filesystem that supports
# linux filesystem permissions (eg: ext4,xfs,btrfs...)
TOKEN_PATH=/mnt/user/system/nvidia
PING=$(which ping)
# Check if the License is applied
if [[ "$(nvidia-smi -q | grep "Expiry")" == *Expiry* ]]; then
echo " [i] Your vGPU Guest drivers are already licensed."
echo " [i] $(nvidia-smi -q | grep "Expiry")"
echo " [<] Exiting..."
exit 0
fi
# Check if the FastAPI-DLS server is reachable
# Check if the License is applied
MAX_RETRIES=30
for i in $(seq 1 $MAX_RETRIES); do
echo -ne "\r [>] Attempt $i to connect to $DLS_IP."
if ping -c 1 $DLS_IP >/dev/null 2>&1; then
echo -e "\n [*] Connection successful."
break
fi
if [ $i -eq $MAX_RETRIES ]; then
echo -e "\n [!] Connection failed after $MAX_RETRIES attempts."
echo -e "\n [<] Exiting..."
exit 1
fi
sleep 1
done
# Check if the token folder exists
if [ -d "${TOKEN_PATH}" ]; then
echo " [*] Token Folder exists. Proceeding..."
else
echo " [!] Token Folder does not exists or not ready yet. Exiting."
echo " [!] Token Folder Specified: ${TOKEN_PATH}"
exit 1
fi
# Check if elfutils are installed, otherwise nvidia-gridd service
# wont start
if [ "$(grep -R "elfutils" /var/log/packages/* | wc -l)" != 0 ]; then
echo " [*] Elfutils is installed, proceeding..."
else
echo " [!] Elfutils is not installed, downloading and installing..."
echo " [!] Downloading elfutils to /boot/extra"
echo " [i] This script will download elfutils from slackware64-15.0 repository."
echo " [i] If you have a different version of Unraid (6.11.5), you might want to"
echo " [i] download and install a suitable version manually from the slackware"
echo " [i] repository, and put it in /boot/extra to be install on boot."
echo " [i] You may also install it by running: "
echo " [i] upgradepkg --install-new /path/to/elfutils-*.txz"
echo ""
echo " [>] Downloading elfutils from slackware64-15.0 repository:"
wget -q -nc --show-progress --progress=bar:force:noscroll -P /boot/extra https://slackware.uk/slackware/slackware64-15.0/slackware64/l/elfutils-0.186-x86_64-1.txz 2>/dev/null \
|| { echo " [!] Error while downloading elfutils, please download it and install it manually."; exit 1; }
echo ""
if upgradepkg --install-new /boot/extra/elfutils-0.186-x86_64-1.txz
then
echo " [*] Elfutils installed and will be installed automatically on boot"
else
echo " [!] Error while installing, check logs..."
exit 1
fi
fi
echo " [~] Sleeping for 60 seconds before continuing..."
echo " [i] The script is waiting until the boot process settles down."
for i in {60..1}; do
printf "\r [~] %d seconds remaining" "$i"
sleep 1
done
printf "\n"
create_token () {
echo " [>] Creating new token..."
if ${PING} -c1 ${DLS_IP} > /dev/null 2>&1
then
# curl --insecure -L -X GET https://${DLS_IP}:${DLS_PORT}/-/client-token -o ${TOKEN_PATH}/client_configuration_token_"$(date '+%d-%m-%Y-%H-%M-%S')".tok || { echo " [!] Could not get the token, please check the server."; exit 1;}
wget -q -nc -4c --no-check-certificate --show-progress --progress=bar:force:noscroll -O "${TOKEN_PATH}"/client_configuration_token_"$(date '+%d-%m-%Y-%H-%M-%S')".tok https://${DLS_IP}:${DLS_PORT}/-/client-token \
|| { echo " [!] Could not get the token, please check the server."; exit 1;}
chmod 744 "${TOKEN_PATH}"/*.tok || { echo " [!] Could not chmod the tokens."; exit 1; }
echo ""
echo " [*] Token downloaded and stored in ${TOKEN_PATH}."
else
echo " [!] Could not get token, DLS server unavailable ."
exit 1
fi
}
setup_run () {
echo " [>] Setting up gridd.conf"
cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf || { echo " [!] Error configuring gridd.conf, did you install the drivers correctly?"; exit 1; }
sed -i 's/FeatureType=0/FeatureType=1/g' /etc/nvidia/gridd.conf
echo "ClientConfigTokenPath=${TOKEN_PATH}" >> /etc/nvidia/gridd.conf
echo " [>] Creating /var/lib/nvidia folder structure"
mkdir -p /var/lib/nvidia/GridLicensing
echo " [>] Starting nvidia-gridd"
if pgrep nvidia-gridd >/dev/null 2>&1; then
echo " [!] nvidia-gridd service is running. Closing."
sh /usr/lib/nvidia/sysv/nvidia-gridd stop
stop_exit_code=$?
if [ $stop_exit_code -eq 0 ]; then
echo " [*] nvidia-gridd service stopped successfully."
else
echo " [!] Error while stopping nvidia-gridd service."
exit 1
fi
# Kill the service if it does not close
if pgrep nvidia-gridd >/dev/null 2>&1; then
kill -9 "$(pgrep nvidia-gridd)" || {
echo " [!] Error while closing nvidia-gridd service"
exit 1
}
fi
echo " [*] Restarting nvidia-gridd service."
sh /usr/lib/nvidia/sysv/nvidia-gridd start
if pgrep nvidia-gridd >/dev/null 2>&1; then
echo " [*] Service started, PID: $(pgrep nvidia-gridd)"
else
echo -e " [!] Error while starting nvidia-gridd service. Use strace -f nvidia-gridd to debug.\n [i] Check if elfutils is installed.\n [i] strace is not installed by default."
exit 1
fi
else
sh /usr/lib/nvidia/sysv/nvidia-gridd start
if pgrep nvidia-gridd >/dev/null 2>&1; then
echo " [*] Service started, PID: $(pgrep nvidia-gridd)"
else
echo -e " [!] Error while starting nvidia-gridd service. Use strace -f nvidia-gridd to debug.\n [i] Check if elfutils is installed.\n [i] strace is not installed by default."
exit 1
fi
fi
}
for token in "${TOKEN_PATH}"/*; do
if [ "${token: -4}" == ".tok" ]
then
echo " [*] Tokens found..."
setup_run
else
echo " [!] No Tokens found..."
create_token
setup_run
fi
done
while true; do
if nvidia-smi -q | grep "Expiry" >/dev/null 2>&1; then
echo " [>] vGPU licensed!"
echo " [i] $(nvidia-smi -q | grep "Expiry")"
break
else
echo -ne " [>] vGPU not licensed yet... Checking again in 5 seconds\c"
for i in {1..5}; do
sleep 1
echo -ne ".\c"
done
echo -ne "\r\c"
fi
done
echo " [>] Done..."
exit 0


@@ -1,7 +1,9 @@
+version: "2"
 plugins:
 bandit:
 enabled: true
 sonar-python:
 enabled: true
-pylint:
-enabled: true
+config:
+tests_patterns:
+- test/**


@@ -1,6 +1,16 @@
+include:
+- template: Jobs/Code-Quality.gitlab-ci.yml
+- template: Jobs/Secret-Detection.gitlab-ci.yml
+- template: Jobs/SAST.gitlab-ci.yml
+- template: Jobs/Container-Scanning.gitlab-ci.yml
+- template: Jobs/Dependency-Scanning.gitlab-ci.yml
 cache:
 key: one-key-to-rule-them-all
+variables:
+DOCKER_BUILDX_PLATFORM: "linux/amd64,linux/arm64"
 build:docker:
 image: docker:dind
 interruptible: true
@@ -10,29 +20,42 @@ build:docker:
 changes:
 - app/**/*
 - Dockerfile
+- requirements.txt
 - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
 tags: [ docker ]
 before_script:
-- echo "COMMIT=${CI_COMMIT_SHA}" >> version.env # COMMIT=`git rev-parse HEAD`
+- docker buildx inspect
+- docker buildx create --use
 script:
 - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
-- docker build . --tag ${CI_REGISTRY}/${CI_PROJECT_PATH}/${CI_BUILD_REF_NAME}:${CI_BUILD_REF}
-- docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}/${CI_BUILD_REF_NAME}:${CI_BUILD_REF}
+- IMAGE=$CI_REGISTRY/$CI_PROJECT_PATH/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHA
+- docker buildx build --progress=plain --platform $DOCKER_BUILDX_PLATFORM --build-arg VERSION=$CI_COMMIT_REF_NAME --build-arg COMMIT=$CI_COMMIT_SHA --tag $IMAGE --push .
+- docker buildx imagetools inspect $IMAGE
+- echo "CS_IMAGE=$IMAGE" > container_scanning.env
+artifacts:
+reports:
+dotenv: container_scanning.env
 build:apt:
 image: debian:bookworm-slim
 interruptible: true
 stage: build
 rules:
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+- if: $CI_COMMIT_TAG
+variables:
+VERSION: $CI_COMMIT_REF_NAME
 - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
 changes:
 - app/**/*
 - .DEBIAN/**/*
+- .gitlab-ci.yml
+variables:
+VERSION: "0.0.1"
 - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
+variables:
+VERSION: "0.0.1"
 before_script:
-- echo "COMMIT=${CI_COMMIT_SHA}" >> version.env
-- source version.env
+- echo -e "VERSION=$VERSION\nCOMMIT=$CI_COMMIT_SHA" > version.env
 # install build dependencies
 - apt-get update -qq && apt-get install -qq -y build-essential
 # create build directory for .deb sources
@@ -53,7 +76,7 @@ build:apt:
 # cd into "build/"
 - cd build/
 script:
-# set version based on value in "$VERSION" (which is set above from version.env)
+# set version based on value in "$CI_COMMIT_REF_NAME"
 - sed -i -E 's/(Version\:\s)0.0/\1'"$VERSION"'/g' DEBIAN/control
 # build
 - dpkg -b . build.deb
@@ -68,14 +91,21 @@ build:pacman:
 interruptible: true
 stage: build
 rules:
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+- if: $CI_COMMIT_TAG
+variables:
+VERSION: $CI_COMMIT_REF_NAME
 - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
 changes:
 - app/**/*
 - .PKGBUILD/**/*
+- .gitlab-ci.yml
+variables:
+VERSION: "0.0.1"
 - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
+variables:
+VERSION: "0.0.1"
 before_script:
-- echo "COMMIT=${CI_COMMIT_SHA}" >> version.env
+#- echo -e "VERSION=$VERSION\nCOMMIT=$CI_COMMIT_SHA" > version.env
 # install build dependencies
 - pacman -Syu --noconfirm git
 # create a build-user because "makepkg" don't like root user
@@ -90,34 +120,55 @@ build:pacman:
 # download dependencies
 - source PKGBUILD && pacman -Syu --noconfirm --needed --asdeps "${makedepends[@]}" "${depends[@]}"
 # build
-- sudo -u build makepkg -s
+- sudo --preserve-env -u build makepkg -s
 artifacts:
 expire_in: 1 week
 paths:
 - "*.pkg.tar.zst"
 test:
-image: python:3.11-slim-bullseye
+image: $IMAGE
 stage: test
+interruptible: true
 rules:
-- if: $CI_COMMIT_BRANCH
+- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+- if: $CI_COMMIT_TAG
 - if: $CI_PIPELINE_SOURCE == "merge_request_event"
+- if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
+changes:
+- app/**/*
+- test/**/*
 variables:
 DATABASE: sqlite:///../app/db.sqlite
+parallel:
+matrix:
+- IMAGE: [ 'python:3.12-slim-bookworm' ]
+REQUIREMENTS: [ 'requirements.txt' ]
+- IMAGE: [ 'debian:bookworm' ] # EOL: June 06, 2026
+REQUIREMENTS: [ '.DEBIAN/requirements-bookworm-12.txt' ]
+- IMAGE: [ 'ubuntu:24.04' ] # EOL: April 2036
+REQUIREMENTS: [ '.DEBIAN/requirements-ubuntu-24.04.txt' ]
+- IMAGE: [ 'ubuntu:24.10' ]
+REQUIREMENTS: [ '.DEBIAN/requirements-ubuntu-24.10.txt' ]
 before_script:
-- pip install -r requirements.txt
+- apt-get update && apt-get install -y python3-dev python3-pip python3-venv gcc
+- python3 -m venv venv
+- source venv/bin/activate
+- pip install --upgrade pip
+- pip install -r $REQUIREMENTS
 - pip install pytest httpx
 - mkdir -p app/cert
 - openssl genrsa -out app/cert/instance.private.pem 2048
 - openssl rsa -in app/cert/instance.private.pem -outform PEM -pubout -out app/cert/instance.public.pem
 - cd test
 script:
-- pytest main.py
+- python -m pytest main.py --junitxml=report.xml
 artifacts:
 reports:
 dotenv: version.env
+junit: ['**/report.xml']
-.test:linux:
+.test:apt:
 stage: test
 rules:
 - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
@@ -156,15 +207,15 @@ test:
 - apt-get purge -qq -y fastapi-dls
 - apt-get autoremove -qq -y && apt-get clean -qq
-test:debian:
-extends: .test:linux
+test:apt:debian:
+extends: .test:apt
 image: debian:bookworm-slim
-test:ubuntu:
-extends: .test:linux
-image: ubuntu:22.10
+test:apt:ubuntu:
+extends: .test:apt
+image: ubuntu:24.04
-test:archlinux:
+test:pacman:archlinux:
 image: archlinux:base
 rules:
 - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
@@ -179,42 +230,103 @@ test:archlinux:
 - pacman -Sy
 - pacman -U --noconfirm *.pkg.tar.zst
+code_quality:
+variables:
+SOURCE_CODE: app
+rules:
+- if: $CODE_QUALITY_DISABLED
+when: never
+- if: $CI_PIPELINE_SOURCE == "merge_request_event"
+- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+secret_detection:
+rules:
+- if: $SECRET_DETECTION_DISABLED
+when: never
+- if: $CI_PIPELINE_SOURCE == "merge_request_event"
+before_script:
+- git config --global --add safe.directory $CI_PROJECT_DIR
+semgrep-sast:
+rules:
+- if: $SAST_DISABLED
+when: never
+- if: $CI_PIPELINE_SOURCE == "merge_request_event"
+- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+test_coverage:
+# extends: test
+image: python:3.12-slim-bookworm
+allow_failure: true
+stage: test
+rules:
+- if: $CI_PIPELINE_SOURCE == "merge_request_event"
+variables:
+DATABASE: sqlite:///../app/db.sqlite
+before_script:
+- apt-get update && apt-get install -y python3-dev gcc
+- pip install -r requirements.txt
+- pip install pytest httpx
+- mkdir -p app/cert
+- openssl genrsa -out app/cert/instance.private.pem 2048
+- openssl rsa -in app/cert/instance.private.pem -outform PEM -pubout -out app/cert/instance.public.pem
+- cd test
+script:
+- pip install pytest pytest-cov
+- coverage run -m pytest main.py
+- coverage report
+- coverage xml
+coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
+artifacts:
+reports:
+coverage_report:
+coverage_format: cobertura
+path: '**/coverage.xml'
+container_scanning:
+dependencies: [ build:docker ]
+rules:
+- if: $CONTAINER_SCANNING_DISABLED
+when: never
+- if: $CI_PIPELINE_SOURCE == "merge_request_event"
+gemnasium-python-dependency_scanning:
+rules:
+- if: $DEPENDENCY_SCANNING_DISABLED
+when: never
+- if: $CI_PIPELINE_SOURCE == "merge_request_event"
+- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
 .deploy:
 rules:
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
 - if: $CI_COMMIT_TAG
-when: never
 deploy:docker:
 extends: .deploy
+image: docker:dind
 stage: deploy
-rules:
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+tags: [ docker ]
 before_script:
-- echo "COMMIT=${CI_COMMIT_SHA}" >> version.env
-- source version.env
-- echo "Building docker image for commit ${COMMIT} with version ${VERSION}"
+- echo "Building docker image for commit $CI_COMMIT_SHA with version $CI_COMMIT_REF_NAME"
+- docker buildx inspect
+- docker buildx create --use
 script:
-- echo "GitLab-Registry"
+- echo "========== GitLab-Registry =========="
 - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
-- docker build . --tag ${CI_REGISTRY}/${CI_PROJECT_PATH}/${CI_BUILD_REF_NAME}:${VERSION}
-- docker build . --tag ${CI_REGISTRY}/${CI_PROJECT_PATH}/${CI_BUILD_REF_NAME}:latest
-- docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}/${CI_BUILD_REF_NAME}:${VERSION}
-- docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}/${CI_BUILD_REF_NAME}:latest
-- echo "Docker-Hub"
+- IMAGE=$CI_REGISTRY/$CI_PROJECT_PATH
+- docker buildx build --progress=plain --platform $DOCKER_BUILDX_PLATFORM --build-arg VERSION=$CI_COMMIT_REF_NAME --build-arg COMMIT=$CI_COMMIT_SHA --tag $IMAGE:$CI_COMMIT_REF_NAME --push .
+- docker buildx build --progress=plain --platform $DOCKER_BUILDX_PLATFORM --build-arg VERSION=$CI_COMMIT_REF_NAME --build-arg COMMIT=$CI_COMMIT_SHA --tag $IMAGE:latest --push .
+- echo "========== Docker-Hub =========="
 - docker login -u $PUBLIC_REGISTRY_USER -p $PUBLIC_REGISTRY_TOKEN
-- docker build . --tag $PUBLIC_REGISTRY_USER/${CI_PROJECT_NAME}:${VERSION}
-- docker build . --tag $PUBLIC_REGISTRY_USER/${CI_PROJECT_NAME}:latest
-- docker push $PUBLIC_REGISTRY_USER/${CI_PROJECT_NAME}:${VERSION}
-- docker push $PUBLIC_REGISTRY_USER/${CI_PROJECT_NAME}:latest
+- IMAGE=$PUBLIC_REGISTRY_USER/$CI_PROJECT_NAME
+- docker buildx build --progress=plain --platform $DOCKER_BUILDX_PLATFORM --build-arg VERSION=$CI_COMMIT_REF_NAME --build-arg COMMIT=$CI_COMMIT_SHA --tag $IMAGE:$CI_COMMIT_REF_NAME --push .
+- docker buildx build --progress=plain --platform $DOCKER_BUILDX_PLATFORM --build-arg VERSION=$CI_COMMIT_REF_NAME --build-arg COMMIT=$CI_COMMIT_SHA --tag $IMAGE:latest --push .
 deploy:apt:
 # doc: https://git.collinwebdesigns.de/help/user/packages/debian_repository/index.md#install-a-package
 extends: .deploy
 image: debian:bookworm-slim
 stage: deploy
-rules:
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
 needs:
 - job: build:apt
 artifacts: true
@@ -254,18 +366,15 @@ deploy:pacman:
 extends: .deploy
 image: archlinux:base-devel
 stage: deploy
-rules:
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
 needs:
 - job: build:pacman
 artifacts: true
 script:
 - source .PKGBUILD/PKGBUILD
-- source version.env
 # fastapi-dls-1.0-1-any.pkg.tar.zst
-- BUILD_NAME=${pkgname}-${VERSION}-${pkgrel}-any.pkg.tar.zst
+- BUILD_NAME=${pkgname}-${CI_COMMIT_REF_NAME}-${pkgrel}-any.pkg.tar.zst
 - PACKAGE_NAME=${pkgname}
-- PACKAGE_VERSION=${VERSION}
+- PACKAGE_VERSION=${CI_COMMIT_REF_NAME}
 - PACKAGE_ARCH=any
 - EXPORT_NAME=${BUILD_NAME}
 - 'echo "PACKAGE_NAME: ${PACKAGE_NAME}"'
@@ -277,19 +386,15 @@
 release:
 image: registry.gitlab.com/gitlab-org/release-cli:latest
 stage: .post
-needs:
-- job: test
-artifacts: true
+needs: [ test ]
 rules:
 - if: $CI_COMMIT_TAG
-when: never
-- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
 script:
-- echo "Running release-job for $VERSION"
+- echo "Running release-job for $CI_COMMIT_TAG"
 release:
-name: $CI_PROJECT_TITLE $VERSION
-description: Release of $CI_PROJECT_TITLE version $VERSION
-tag_name: $VERSION
+name: $CI_PROJECT_TITLE $CI_COMMIT_TAG
+description: Release of $CI_PROJECT_TITLE version $CI_COMMIT_TAG
+tag_name: $CI_COMMIT_TAG
 ref: $CI_COMMIT_SHA
 assets:
 links:


@@ -1,17 +1,20 @@
-FROM python:3.11-alpine
+FROM python:3.12-alpine
+ARG VERSION
+ARG COMMIT=""
+RUN echo -e "VERSION=$VERSION\nCOMMIT=$COMMIT" > /version.env
 COPY requirements.txt /tmp/requirements.txt
 RUN apk update \
-&& apk add --no-cache --virtual build-deps gcc g++ python3-dev musl-dev \
-&& apk add --no-cache curl postgresql postgresql-dev mariadb-connector-c-dev sqlite-dev \
+&& apk add --no-cache --virtual build-deps gcc g++ python3-dev musl-dev pkgconfig \
+&& apk add --no-cache curl postgresql postgresql-dev mariadb-dev sqlite-dev \
 && pip install --no-cache-dir --upgrade uvicorn \
-&& pip install --no-cache-dir psycopg2==2.9.5 mysqlclient==2.1.1 pysqlite3==0.5.0 \
+&& pip install --no-cache-dir psycopg2==2.9.10 mysqlclient==2.2.6 pysqlite3==0.5.4 \
 && pip install --no-cache-dir -r /tmp/requirements.txt \
 && apk del build-deps
 COPY app /app
-COPY version.env /version.env
 COPY README.md /README.md
 HEALTHCHECK --start-period=30s --interval=10s --timeout=5s --retries=3 CMD curl --insecure --fail https://localhost/-/health || exit 1

README.md

@@ -2,22 +2,58 @@
Minimal Delegated License Service (DLS). Minimal Delegated License Service (DLS).
Compatibility tested with official DLS 2.0.1. Compatibility tested with official NLS 2.0.1, 2.1.0, 3.1.0, 3.3.1. For Driver compatibility
see [compatibility matrix](#vgpu-software-compatibility-matrix).
This service can be used without internet connection. This service can be used without internet connection.
Only the clients need a connection to this service on configured port. Only the clients need a connection to this service on configured port.
[[_TOC_]] **Official Links**
* https://git.collinwebdesigns.de/oscar.krause/fastapi-dls (Private Git)
* https://gitea.publichub.eu/oscar.krause/fastapi-dls (Public Git)
* https://hub.docker.com/r/collinwebdesigns/fastapi-dls (Docker-Hub `collinwebdesigns/fastapi-dls:latest`)
*All other repositories are forks! (which is no bad - just for information and bug reports)*
[Releases & Release Notes](https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/-/releases)
**Further Reading**
* [NVIDIA vGPU Guide](https://gitlab.com/polloloco/vgpu-proxmox) - This document serves as a guide to install NVIDIA vGPU host drivers on the latest Proxmox VE version
* [vgpu_unlock](https://github.com/DualCoder/vgpu_unlock) - Unlock vGPU functionality for consumer-grade Nvidia GPUs.
* [vGPU_Unlock Wiki](https://docs.google.com/document/d/1pzrWJ9h-zANCtyqRgS7Vzla0Y8Ea2-5z2HEi4X75d2Q) - Guide for `vgpu_unlock`
* [Proxmox All-In-One Installer Script](https://wvthoog.nl/proxmox-vgpu-v3/) - Also known as `proxmox-installer.sh`
---
[TOC]
# Setup (Service) # Setup (Service)
**System requirements**
- 256mb ram
- 4gb hdd
- *maybe IPv6 must be disabled*
Tested with Ubuntu 22.10 (EOL!) (from Proxmox templates), actually its consuming 100mb ram and 750mb hdd.
**Prepare your system**
- Make sure your timezone is set correct on you fastapi-dls server and your client
This guide does not show how to install vGPU host drivers! Look at the official documentation packed with the driver
releases.
## Docker ## Docker
Docker-Images are available here: Docker-Images are available here for Intel (x86), AMD (amd64) and ARM (arm64):
- [Docker-Hub](https://hub.docker.com/repository/docker/collinwebdesigns/fastapi-dls): `collinwebdesigns/fastapi-dls:latest` - [Docker-Hub](https://hub.docker.com/repository/docker/collinwebdesigns/fastapi-dls): `collinwebdesigns/fastapi-dls:latest`
- [GitLab-Registry](https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/container_registry): `registry.git.collinwebdesigns.de/oscar.krause/fastapi-dls/main:latest` - [GitLab-Registry](https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/container_registry): `registry.git.collinwebdesigns.de/oscar.krause/fastapi-dls:latest`
The images include database drivers for `postgres`, `mariadb` and `sqlite`.
**Run this on the Docker-Host** **Run this on the Docker-Host**
@@ -43,16 +79,20 @@ docker run -e DLS_URL=`hostname -i` -e DLS_PORT=443 -p 443:443 -v $WORKING_DIR:/
**Docker-Compose / Deploy stack** **Docker-Compose / Deploy stack**
Goto [`docker-compose.yml`](docker-compose.yml) for more advanced example (with reverse proxy usage). See [`examples`](examples) directory for more advanced examples (with reverse proxy usage).
> Adjust *REQUIRED* variables as needed
```yaml ```yaml
version: '3.9' version: '3.9'
x-dls-variables: &dls-variables x-dls-variables: &dls-variables
TZ: Europe/Berlin # REQUIRED, set your timezone correctly on fastapi-dls AND YOUR CLIENTS !!!
DLS_URL: localhost # REQUIRED, change to your ip or hostname DLS_URL: localhost # REQUIRED, change to your ip or hostname
DLS_PORT: 443 DLS_PORT: 443
LEASE_EXPIRE_DAYS: 90 LEASE_EXPIRE_DAYS: 90 # 90 days is maximum
DATABASE: sqlite:////app/database/db.sqlite DATABASE: sqlite:////app/database/db.sqlite
DEBUG: false
services: services:
dls: dls:
@@ -65,14 +105,22 @@ services:
volumes: volumes:
- /opt/docker/fastapi-dls/cert:/app/cert - /opt/docker/fastapi-dls/cert:/app/cert
- dls-db:/app/database - dls-db:/app/database
logging: # optional, for those who do not need logs
driver: "json-file"
options:
max-file: 5
max-size: 10m
volumes: volumes:
dls-db: dls-db:
``` ```
## Debian/Ubuntu (manual method using `git clone` and python virtual environment) ## Debian / Ubuntu / macOS (manual method using `git clone` and python virtual environment)
Tested on `Debian 11 (bullseye)`, Ubuntu may also work. Tested on `Debian 11 (bullseye)`, `Debian 12 (bookworm)` and `macOS Ventura (13.6)`, Ubuntu may also work.
**Please note that setup on macOS differs from Debian based systems.**
**Make sure you are logged in as root.**
**Install requirements**
@@ -98,7 +146,7 @@ chown -R www-data:www-data $WORKING_DIR
```shell
WORKING_DIR=/opt/fastapi-dls/app/cert
mkdir -p $WORKING_DIR
cd $WORKING_DIR
# create instance private and public key for signing JWTs
openssl genrsa -out $WORKING_DIR/instance.private.pem 2048
@@ -114,12 +162,17 @@ This is only to test whether the service starts successfully.
```shell
cd /opt/fastapi-dls/app
sudo -u www-data /opt/fastapi-dls/venv/bin/uvicorn main:app --app-dir=/opt/fastapi-dls/app
# or
su - www-data -c "/opt/fastapi-dls/venv/bin/uvicorn main:app --app-dir=/opt/fastapi-dls/app"
```
**Create config file**
> Adjust `DLS_URL` as needed (accessing from LAN won't work with 127.0.0.1)
```shell
mkdir /etc/fastapi-dls
cat <<EOF >/etc/fastapi-dls/env
DLS_URL=127.0.0.1
DLS_PORT=443
@@ -164,7 +217,112 @@ EOF
Now you have to run `systemctl daemon-reload`. After that you can start the service
with `systemctl start fastapi-dls.service` and enable autostart with `systemctl enable fastapi-dls.service`.
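Once the service is running you can quickly check that it answers; `--insecure` is needed because of the self-signed webserver certificate:

```shell
curl --insecure https://localhost:443/-/health
# expected: {"status":"up"}
```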
## openSUSE Leap (manual method using `git clone` and python virtual environment)
Tested on `openSUSE Leap 15.4`, openSUSE Tumbleweed may also work.
**Install requirements**
```shell
zypper in -y python310 python3-virtualenv python3-pip
```
**Install FastAPI-DLS**
```shell
BASE_DIR=/opt/fastapi-dls
SERVICE_USER=dls
mkdir -p ${BASE_DIR}
cd ${BASE_DIR}
git clone https://git.collinwebdesigns.de/oscar.krause/fastapi-dls .
python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
deactivate
useradd -r ${SERVICE_USER} -M -d /opt/fastapi-dls
chown -R ${SERVICE_USER} ${BASE_DIR}
```
**Create keypair and webserver certificate**
```shell
CERT_DIR=${BASE_DIR}/app/cert
SERVICE_USER=dls
mkdir ${CERT_DIR}
cd ${CERT_DIR}
# create instance private and public key for signing JWTs
openssl genrsa -out ${CERT_DIR}/instance.private.pem 2048
openssl rsa -in ${CERT_DIR}/instance.private.pem -outform PEM -pubout -out ${CERT_DIR}/instance.public.pem
# create ssl certificate for integrated webserver (uvicorn) - because clients rely on ssl
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ${CERT_DIR}/webserver.key -out ${CERT_DIR}/webserver.crt
chown -R ${SERVICE_USER} ${CERT_DIR}
```
**Test Service**
This is only to test whether the service starts successfully.
```shell
BASE_DIR=/opt/fastapi-dls
SERVICE_USER=dls
cd ${BASE_DIR}
sudo -u ${SERVICE_USER} ${BASE_DIR}/venv/bin/uvicorn main:app --app-dir=${BASE_DIR}/app
# or
su - ${SERVICE_USER} -c "${BASE_DIR}/venv/bin/uvicorn main:app --app-dir=${BASE_DIR}/app"
```
**Create config file**
> Adjust `DLS_URL` as needed (accessing from LAN won't work with 127.0.0.1)
```shell
BASE_DIR=/opt/fastapi-dls
mkdir -p /etc/fastapi-dls
cat <<EOF >/etc/fastapi-dls/env
DLS_URL=127.0.0.1
DLS_PORT=443
LEASE_EXPIRE_DAYS=90
DATABASE=sqlite:///${BASE_DIR}/app/db.sqlite
EOF
```
**Create service**
```shell
BASE_DIR=/opt/fastapi-dls
SERVICE_USER=dls
cat <<EOF >/etc/systemd/system/fastapi-dls.service
[Unit]
Description=Service for fastapi-dls vGPU licensing service
After=network.target
[Service]
User=${SERVICE_USER}
AmbientCapabilities=CAP_NET_BIND_SERVICE
WorkingDirectory=${BASE_DIR}/app
EnvironmentFile=/etc/fastapi-dls/env
ExecStart=${BASE_DIR}/venv/bin/uvicorn main:app \\
--env-file /etc/fastapi-dls/env \\
--host \$DLS_URL --port \$DLS_PORT \\
--app-dir ${BASE_DIR}/app \\
--ssl-keyfile ${BASE_DIR}/app/cert/webserver.key \\
--ssl-certfile ${BASE_DIR}/app/cert/webserver.crt \\
--proxy-headers
Restart=always
KillSignal=SIGQUIT
Type=simple
NotifyAccess=all
[Install]
WantedBy=multi-user.target
EOF
```
Now you have to run `systemctl daemon-reload`. After that you can start the service
with `systemctl start fastapi-dls.service` and enable autostart with `systemctl enable fastapi-dls.service`.
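For reference, the commands from the paragraph above in order, plus an optional status check:

```shell
systemctl daemon-reload
systemctl start fastapi-dls.service
systemctl enable fastapi-dls.service
systemctl status fastapi-dls.service   # optional: verify the service is running
```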
## Debian / Ubuntu (using `dpkg` / `apt`)
Packages are available here:
@@ -172,8 +330,11 @@ Packages are available here:
Successfully tested with:

- **Debian 12 (Bookworm)** (EOL: June 06, 2026)
- *Ubuntu 22.10 (Kinetic Kudu)* (EOL: July 20, 2023)
- *Ubuntu 23.04 (Lunar Lobster)* (EOL: January 2024)
- *Ubuntu 23.10 (Mantic Minotaur)* (EOL: July 2024)
- **Ubuntu 24.04 (Noble Numbat)** (EOL: April 2036)
Not working with:
@@ -194,6 +355,7 @@ apt-get install -f --fix-missing
```
Start with `systemctl start fastapi-dls.service` and enable autostart with `systemctl enable fastapi-dls.service`.
Now you have to edit `/etc/fastapi-dls/env` as needed.
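A minimal sketch of what `/etc/fastapi-dls/env` could look like; the variable names are documented in the Configuration section below, the values are placeholders:

```shell
cat <<EOF >/etc/fastapi-dls/env
# use the ip or hostname your guests can actually reach
DLS_URL=192.168.0.123
DLS_PORT=443
LEASE_EXPIRE_DAYS=90
EOF
systemctl restart fastapi-dls.service
```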
## ArchLinux (using `pacman`) ## ArchLinux (using `pacman`)
@@ -206,13 +368,31 @@ Packages are available here:
```shell
pacman -Sy
FILENAME=/opt/fastapi-dls.pkg.tar.zst
curl -o $FILENAME <download-url>
# or
wget -O $FILENAME <download-url>

pacman -U --noconfirm fastapi-dls.pkg.tar.zst
```
Start with `systemctl start fastapi-dls.service` and enable autostart with `systemctl enable fastapi-dls.service`.
Now you have to edit `/etc/default/fastapi-dls` as needed.
## unRAID
1. Download [this xml file](.UNRAID/FastAPI-DLS.xml)
2. Put it in `/boot/config/plugins/dockerMan/templates-user/`
3. Go to the Docker page, scroll down to `Add Container`, click on the template list and choose `FastAPI-DLS`
4. Open a terminal/ssh session and follow the instructions in the overview description
5. Set up your container `IP`, `Port`, `DLS_URL` and `DLS_PORT`
6. Apply and let it boot up

*unRAID users must also make sure "Host access to custom networks" is enabled if unRAID is the vGPU guest.*
Continue [here](#unraid-guest) for docker guest setup.
## Let's Encrypt Certificate (optional)
If you're using the Docker installation, you can use `traefik`. Please refer to their documentation.
@@ -230,21 +410,22 @@ After first success you have to replace `--issue` with `--renew`.
# Configuration
| Variable                 | Default                                | Usage                                                                                                                                |
|--------------------------|----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|
| `DEBUG`                  | `false`                                | Toggles `fastapi` debug mode                                                                                                          |
| `DLS_URL`                | `localhost`                            | Used in client-token to tell guest driver where dls instance is reachable                                                            |
| `DLS_PORT`               | `443`                                  | Used in client-token to tell guest driver where dls instance is reachable                                                            |
| `TOKEN_EXPIRE_DAYS`      | `1`                                    | Client auth-token validity (used to authenticate the client against the api, **not the `.tok` file!**)                               |
| `LEASE_EXPIRE_DAYS`      | `90`                                   | Lease time in days                                                                                                                    |
| `LEASE_RENEWAL_PERIOD`   | `0.15`                                 | The percentage of the lease period that must elapse before a licensed client can renew a license \*1                                 |
| `DATABASE`               | `sqlite:///db.sqlite`                  | See [official SQLAlchemy docs](https://docs.sqlalchemy.org/en/14/core/engines.html)                                                  |
| `CORS_ORIGINS`           | `https://{DLS_URL}`                    | Sets `Access-Control-Allow-Origin` header (comma separated string) \*2                                                               |
| `SITE_KEY_XID`           | `00000000-0000-0000-0000-000000000000` | Site identification uuid                                                                                                              |
| `INSTANCE_REF`           | `10000000-0000-0000-0000-000000000001` | Instance identification uuid                                                                                                          |
| `ALLOTMENT_REF`          | `20000000-0000-0000-0000-000000000001` | Allotment identification uuid                                                                                                         |
| `INSTANCE_KEY_RSA`       | `<app-dir>/cert/instance.private.pem`  | Site-wide private RSA key for signing JWTs \*3                                                                                        |
| `INSTANCE_KEY_PUB`       | `<app-dir>/cert/instance.public.pem`   | Site-wide public key \*3                                                                                                              |
| `SUPPORT_MALFORMED_JSON` | `false`                                | Support parsing of a malformed "mac_address_list" ([Issue](https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/-/issues/1))     |
\*1 For example, if the lease period is one day and the renewal period is 20%, the client attempts to renew its license
every 4.8 hours. If network connectivity is lost, the loss of connectivity is detected during license renewal and the
@@ -252,66 +433,115 @@ client has 19.2 hours in which to re-establish connectivity before its license e
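To sanity-check the renewal window for your own values, a quick sketch using the two examples above (renewal interval = lease length x `LEASE_RENEWAL_PERIOD`):

```shell
awk 'BEGIN { print  1 * 24 * 0.20, "hours" }'   # 1-day lease, 20%  -> renews every 4.8 hours
awk 'BEGIN { print 90 * 0.15, "days" }'         # 90-day lease, 15% -> renews every 13.5 days
```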
\*2 Always use `https`, since guest-drivers only support secure connections!

\*3 If you recreate your instance keys you need to **recreate the client-token for each guest**!

# Setup (Client)

**The token file has to be copied! It's not enough to C&P file contents, because there can be special characters.**

This guide does not show how to install vGPU guest drivers! Look at the official documentation shipped with the driver
releases.
## Linux
Download *client-token* and place it into `/etc/nvidia/ClientConfigToken`:
```shell
curl --insecure -L -X GET https://<dls-hostname-or-ip>/-/client-token -o /etc/nvidia/ClientConfigToken/client_configuration_token_$(date '+%d-%m-%Y-%H-%M-%S').tok
# or
wget --no-check-certificate -O /etc/nvidia/ClientConfigToken/client_configuration_token_$(date '+%d-%m-%Y-%H-%M-%S').tok https://<dls-hostname-or-ip>/-/client-token
```
Restart `nvidia-gridd` service:
```shell
service nvidia-gridd restart
```
Check licensing status:
```shell
nvidia-smi -q | grep "License"
```
Output should be something like:

```text
vGPU Software Licensed Product
License Status : Licensed (Expiry: YYYY-M-DD hh:mm:ss GMT)
```

Done. For more information check [troubleshoot section](#troubleshoot).

## Windows
**Power-Shell** (run as administrator!)
Download *client-token* and place it into `C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken`:
```shell
curl.exe --insecure -L -X GET https://<dls-hostname-or-ip>/-/client-token -o "C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\client_configuration_token_$($(Get-Date).tostring('dd-MM-yy-hh-mm-ss')).tok"
```
Restart `NvContainerLocalSystem` service:
```Shell
Restart-Service NVDisplay.ContainerLocalSystem
```
Check licensing status:
```shell
& 'nvidia-smi' -q | Select-String "License"
```
Output should be something like:
```text
vGPU Software Licensed Product
License Status : Licensed (Expiry: YYYY-M-DD hh:mm:ss GMT)
```
Done. For more information check [troubleshoot section](#troubleshoot).
## unRAID Guest
1. Make sure you create a folder on a Linux filesystem (BTRFS/XFS/EXT4...); I recommend `/mnt/user/system/nvidia` (this is where Docker and libvirt preferences are saved, so it's a good place to have that)
2. Edit the script to set your `DLS_IP`, `DLS_PORT` and `TOKEN_PATH` properly
3. Install the `User Scripts` plugin from *Community Apps* (on the Apps page, or search for "User Scripts Unraid" if you're not using CA)
4. Go to `Settings > Users Scripts > Add New Script`
5. Give it a name (preferably without spaces)
6. Click on the *gear icon* to the left of the script name, then edit the script
7. Paste the script and save
8. Set the schedule to `At First Array Start Only`
9. Click on Apply
# API Endpoints
<details>
<summary>show</summary>
**`GET /`**
Redirect to `/-/readme`.

**`GET /-/health`**

Status endpoint, used for *healthcheck*.

**`GET /-/config`**

Shows current runtime environment variables and their values.

**`GET /-/readme`**

HTML rendered README.md.

**`GET /-/manage`**

Shows a very basic UI to delete origins or leases.

**`GET /-/origins?leases=false`**

List registered origins.
@@ -319,11 +549,11 @@ List registered origins.
|-----------------|---------|--------------------------------------|
| `leases`        | `false` | Include referenced leases per origin |

**`DELETE /-/origins`**

Deletes all origins and their leases.

**`GET /-/leases?origin=false`**

List current leases.
@@ -331,22 +561,29 @@ List current leases.
|-----------------|---------|-------------------------------------|
| `origin`        | `false` | Include referenced origin per lease |

**`DELETE /-/lease/{lease_ref}`**

Deletes a lease.

**`GET /-/client-token`**

Generate client token (see [installation](#installation)).

**Others**

There are many other internal API endpoints for handling the authentication and lease process.
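A few example calls against the management endpoints above (`<dls-hostname-or-ip>` and `<lease_ref>` are placeholders; `--insecure` accounts for the self-signed certificate):

```shell
curl --insecure "https://<dls-hostname-or-ip>/-/origins?leases=true"
curl --insecure "https://<dls-hostname-or-ip>/-/leases?origin=true"
curl --insecure -X DELETE "https://<dls-hostname-or-ip>/-/lease/<lease_ref>"
```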
</details>
# Troubleshoot / Debug

**Please make sure that fastapi-dls and your guests use the same timezone!**

You may have to disable IPv6 on the machine running FastAPI-DLS.
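If you suspect IPv6 is the problem, it can be disabled at runtime (a sketch for Linux hosts; persist it via sysctl config only if it actually helps):

```shell
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
```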
## Docker
Logs are available with `docker logs <container>`. To get the correct container-id use `docker container ls` or `docker ps`.
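For example, a quick way to find and follow the container's log output:

```shell
docker ps                    # find the container id or name of the fastapi-dls container
docker logs -f <container>   # follow the logs
```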
## Linux

Logs are available with `journalctl -u nvidia-gridd -f`.
@@ -359,9 +596,9 @@ Logs are available in `C:\Users\Public\Documents\Nvidia\LoggingLog.NVDisplay.Con
## Linux

### Invalid HTTP request

This error message: `uvicorn.error:Invalid HTTP request received.` can be ignored.

- Ref. https://github.com/encode/uvicorn/issues/441
@@ -404,7 +641,7 @@ only
gets a valid local license.

<details>
  <summary>Log example</summary>

**Display-Container-LS**
@@ -470,7 +707,7 @@ The error message can safely be ignored (since we have no license limitation :P)
<0>:End Logging
```

#### log with nginx as reverse proxy (see [docker-compose-http-and-https.yml](examples/docker-compose-http-and-https.yml))

```
<1>:NLS initialized
@@ -487,9 +724,47 @@ The error message can safely be ignored (since we have no license limitation :P)
</details>
# vGPU Software Compatibility Matrix
Successfully tested with these package versions.

| vGPU Software | Driver Branch | Linux vGPU Manager | Linux Driver | Windows Driver | Release Date | EOL Date |
|:-------------:|:-------------:|--------------------|--------------|----------------|--------------:|--------------:|
| `17.4` | R550 | `550.127.06` | `550.127.05` | `553.24` | October 2024 | February 2025 |
| `17.3` | R550 | `550.90.05` | `550.90.07` | `552.74` | July 2024 | |
| `17.2` | R550 | `550.90.05` | `550.90.07` | `552.55` | June 2024 | |
| `17.1` | R550 | `550.54.16` | `550.54.15` | `551.78` | March 2024 | |
| `17.0` | R550 | `550.54.10` | `550.54.14` | `551.61` | February 2024 | |
| `16.8` | R535 | `535.216.01` | `535.216.01` | `538.95` | October 2024 | July 2026 |
| `16.7` | R535 | `535.183.04` | `535.183.06` | `538.78` | July 2024 | |
| `16.6` | R535 | `535.183.04` | `535.183.01` | `538.67` | June 2024 | |
| `16.5` | R535 | `535.161.05` | `535.161.08` | `538.46` | February 2024 | |
| `16.4` | R535 | `535.161.05` | `535.161.07` | `538.33` | February 2024 | |
| `16.3` | R535 | `535.154.02` | `535.154.05` | `538.15` | January 2024 | |
| `16.2` | R535 | `535.129.03` | `535.129.03` | `537.70` | October 2023 | |
| `16.1` | R535 | `535.104.06` | `535.104.05` | `537.13` | August 2023 | |
| `16.0` | R535 | `535.54.06` | `535.54.03` | `536.22` | July 2023 | |
| `15.4` | R525 | `525.147.01` | `525.147.05` | `529.19` | June 2023 | December 2023 |
| `14.4` | R510 | `510.108.03` | `510.108.03` | `514.08` | December 2022 | February 2023 |
- https://docs.nvidia.com/grid/index.html
- https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html
*To get the latest drivers, visit Nvidia or search the `GPU Unlocking` Discord server (Server-ID: `829786927829745685`), channel `licensing`.*
# Credits

Thanks to the vGPU community and all who use this project and report bugs.

Special thanks to:

- @samicrusader who created the build file for **ArchLinux**
- @cyrus who wrote the section for **openSUSE**
- @midi who wrote the section for **unRAID**
- @polloloco who wrote the *[NVIDIA vGPU Guide](https://gitlab.com/polloloco/vgpu-proxmox)*
- @DualCoder who created the [`vgpu_unlock`](https://github.com/DualCoder/vgpu_unlock) functionality
- Krutav Shah who wrote the [vGPU_Unlock Wiki](https://docs.google.com/document/d/1pzrWJ9h-zANCtyqRgS7Vzla0Y8Ea2-5z2HEi4X75d2Q/)
- Wim van 't Hoog for the [Proxmox All-In-One Installer Script](https://wvthoog.nl/proxmox-vgpu-v3/)

And thanks to everyone who contributed to all these libraries!

ROADMAP.md (new file)
@@ -0,0 +1,27 @@
# Roadmap
I am planning to implement the following features in the future.
## HA - High Availability
Support Failover-Mode (secondary ip address) as in the official DLS.

**Note**: Load-Balancing / Round-Robin HA mode is not supported! If you want that, consider using Docker Swarm with a
shared/clustered database (e.g. postgres).
*See [ha branch](https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/-/tree/ha) for current status.*
## UI - User Interface
Add a user interface to manage origins and leases.
*See [ui branch](https://git.collinwebdesigns.de/oscar.krause/fastapi-dls/-/tree/ui) for current status.*
## Config Database
Instead of using environment variables and configuration files and manually creating certificates, store configs and
certificates in the database (like origins and leases). Also, a startup assistant should be provided to prefill
required attributes and create instance certificates. This is more user-friendly and should improve first-time setup.

app/main.py
@@ -1,38 +1,42 @@
import logging
from base64 import b64encode as b64enc
from calendar import timegm
from contextlib import asynccontextmanager
from datetime import datetime, timedelta
from hashlib import sha256
from json import loads as json_loads
from os import getenv as env
from os.path import join, dirname
from uuid import uuid4

from dateutil.relativedelta import relativedelta
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.requests import Request
from jose import jws, jwk, jwt, JWTError
from jose.constants import ALGORITHMS
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from starlette.middleware.cors import CORSMiddleware
from starlette.responses import StreamingResponse, JSONResponse as JSONr, HTMLResponse as HTMLr, Response, RedirectResponse

from orm import Origin, Lease, init as db_init, migrate
from util import load_key, load_file

# Load variables
load_dotenv('../version.env')

# Get current timezone
TZ = datetime.now().astimezone().tzinfo

# Load basic variables
VERSION, COMMIT, DEBUG = env('VERSION', 'unknown'), env('COMMIT', 'unknown'), bool(env('DEBUG', False))

# Database connection
db = create_engine(str(env('DATABASE', 'sqlite:///db.sqlite')))
db_init(db), migrate(db)

# Load DLS variables (all prefixed with "INSTANCE_*" is used as "SERVICE_INSTANCE_*" or "SI_*" in official dls service)
DLS_URL = str(env('DLS_URL', 'localhost'))
DLS_PORT = int(env('DLS_PORT', '443'))
SITE_KEY_XID = str(env('SITE_KEY_XID', '00000000-0000-0000-0000-000000000000'))
@@ -43,11 +47,47 @@ INSTANCE_KEY_PUB = load_key(str(env('INSTANCE_KEY_PUB', join(dirname(__file__),
TOKEN_EXPIRE_DELTA = relativedelta(days=int(env('TOKEN_EXPIRE_DAYS', 1)), hours=int(env('TOKEN_EXPIRE_HOURS', 0)))
LEASE_EXPIRE_DELTA = relativedelta(days=int(env('LEASE_EXPIRE_DAYS', 90)), hours=int(env('LEASE_EXPIRE_HOURS', 0)))
LEASE_RENEWAL_PERIOD = float(env('LEASE_RENEWAL_PERIOD', 0.15))
LEASE_RENEWAL_DELTA = timedelta(days=int(env('LEASE_EXPIRE_DAYS', 90)), hours=int(env('LEASE_EXPIRE_HOURS', 0)))
CLIENT_TOKEN_EXPIRE_DELTA = relativedelta(years=12)
CORS_ORIGINS = str(env('CORS_ORIGINS', '')).split(',') if (env('CORS_ORIGINS')) else [f'https://{DLS_URL}']

jwt_encode_key = jwk.construct(INSTANCE_KEY_RSA.export_key().decode('utf-8'), algorithm=ALGORITHMS.RS256)
jwt_decode_key = jwk.construct(INSTANCE_KEY_PUB.export_key().decode('utf-8'), algorithm=ALGORITHMS.RS256)
# Logging
LOG_LEVEL = logging.DEBUG if DEBUG else logging.INFO
logging.basicConfig(format='[{levelname:^7}] [{module:^15}] {message}', style='{')
logger = logging.getLogger(__name__)
logger.setLevel(LOG_LEVEL)
logging.getLogger('util').setLevel(LOG_LEVEL)
logging.getLogger('NV').setLevel(LOG_LEVEL)
# FastAPI
@asynccontextmanager
async def lifespan(_: FastAPI):
    # on startup
    logger.info(f'''
    Using timezone: {str(TZ)}. Make sure this is correct and match your clients!
    Your clients renew their license every {str(Lease.calculate_renewal(LEASE_RENEWAL_PERIOD, LEASE_RENEWAL_DELTA))}.
    If the renewal fails, the license is {str(LEASE_RENEWAL_DELTA)} valid.
    Your client-token file (.tok) is valid for {str(CLIENT_TOKEN_EXPIRE_DELTA)}.
    ''')
    logger.info(f'Debug is {"enabled" if DEBUG else "disabled"}.')
    yield
    # on shutdown
    logger.info(f'Shutting down ...')
config = dict(openapi_url=None, docs_url=None, redoc_url=None) # dict(openapi_url='/-/openapi.json', docs_url='/-/docs', redoc_url='/-/redoc')
app = FastAPI(title='FastAPI-DLS', description='Minimal Delegated License Service (DLS).', version=VERSION, lifespan=lifespan, **config)
app.debug = DEBUG
app.add_middleware(
    CORSMiddleware,
@@ -56,16 +96,22 @@ app.add_middleware(
    allow_methods=['*'],
    allow_headers=['*'],
)

if bool(env('SUPPORT_MALFORMED_JSON', False)):
    from middleware import PatchMalformedJsonMiddleware
    logger.info(f'Enabled "PatchMalformedJsonMiddleware"!')
    app.add_middleware(PatchMalformedJsonMiddleware, enabled=True)


# Helper

def __get_token(request: Request) -> dict:
    authorization_header = request.headers.get('authorization')
    token = authorization_header.split(' ')[1]
    return jwt.decode(token=token, key=jwt_decode_key, algorithms=ALGORITHMS.RS256, options={'verify_aud': False})
# Endpoints
@app.get('/', summary='Index')
async def index():
    return RedirectResponse('/-/readme')
@@ -77,7 +123,7 @@ async def _index():
@app.get('/-/health', summary='* Health')
async def _health():
    return JSONr({'status': 'up'})
@@ -96,13 +142,14 @@ async def _config():
        'LEASE_EXPIRE_DELTA': str(LEASE_EXPIRE_DELTA),
        'LEASE_RENEWAL_PERIOD': str(LEASE_RENEWAL_PERIOD),
        'CORS_ORIGINS': str(CORS_ORIGINS),
        'TZ': str(TZ),
    })


@app.get('/-/readme', summary='* Readme')
async def _readme():
    from markdown import markdown
    content = load_file(join(dirname(__file__), '../README.md')).decode('utf-8')
    return HTMLr(markdown(text=content, extensions=['tables', 'fenced_code', 'md_in_html', 'nl2br', 'toc']))
@@ -151,7 +198,8 @@ async def _origins(request: Request, leases: bool = False):
    for origin in session.query(Origin).all():
        x = origin.serialize()
        if leases:
            serialize = dict(renewal_period=LEASE_RENEWAL_PERIOD, renewal_delta=LEASE_RENEWAL_DELTA)
            x['leases'] = list(map(lambda _: _.serialize(**serialize), Lease.find_by_origin_ref(db, origin.origin_ref)))
        response.append(x)

    session.close()
    return JSONr(response)
@@ -168,7 +216,8 @@ async def _leases(request: Request, origin: bool = False):
    session = sessionmaker(bind=db)()
    response = []
    for lease in session.query(Lease).all():
        serialize = dict(renewal_period=LEASE_RENEWAL_PERIOD, renewal_delta=LEASE_RENEWAL_DELTA)
        x = lease.serialize(**serialize)
        if origin:
            lease_origin = session.query(Origin).filter(Origin.origin_ref == lease.origin_ref).first()
            if lease_origin is not None:
@@ -178,6 +227,12 @@ async def _leases(request: Request, origin: bool = False):
    return JSONr(response)


@app.delete('/-/leases/expired', summary='* Leases')
async def _lease_delete_expired(request: Request):
    Lease.delete_expired(db)
    return Response(status_code=201)


@app.delete('/-/lease/{lease_ref}', summary='* Lease')
async def _lease_delete(request: Request, lease_ref: str):
    if Lease.delete(db, lease_ref) == 1:
@@ -189,7 +244,7 @@ async def _lease_delete(request: Request, lease_ref: str):
@app.get('/-/client-token', summary='* Client-Token', description='creates a new messenger token for this service instance')
async def _client_token():
    cur_time = datetime.utcnow()
    exp_time = cur_time + CLIENT_TOKEN_EXPIRE_DELTA

    payload = {
        "jti": str(uuid4()),

app/middleware.py (new file)
@@ -0,0 +1,43 @@
import json
import logging
import re

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

logger = logging.getLogger(__name__)


class PatchMalformedJsonMiddleware(BaseHTTPMiddleware):
    # see oscar.krause/fastapi-dls#1
    REGEX = '(\"mac_address_list\"\:\s?\[)([\w\d])'

    def __init__(self, app, enabled: bool):
        super().__init__(app)
        self.enabled = enabled

    async def dispatch(self, request: Request, call_next):
        body = await request.body()
        content_type = request.headers.get('Content-Type')

        if self.enabled and content_type == 'application/json':
            body = body.decode()
            try:
                json.loads(body)
            except json.decoder.JSONDecodeError:
                logger.warning(f'Malformed json received! Try to fix it, "PatchMalformedJsonMiddleware" is enabled.')
                s = PatchMalformedJsonMiddleware.fix_json(body)
                logger.debug(f'Fixed JSON: "{s}"')
                s = json.loads(s)  # ensure json is now valid
                # set new body
                request._body = json.dumps(s).encode('utf-8')

        response = await call_next(request)
        return response

    @staticmethod
    def fix_json(s: str) -> str:
        s = s.replace('\t', '')
        s = s.replace('\n', '')
        return re.sub(PatchMalformedJsonMiddleware.REGEX, r'\1"\2', s)

app/orm.py
@@ -1,9 +1,11 @@
from datetime import datetime, timedelta, timezone

from dateutil.relativedelta import relativedelta
from sqlalchemy import Column, VARCHAR, CHAR, ForeignKey, DATETIME, update, and_, inspect, text
from sqlalchemy.engine import Engine
from sqlalchemy.orm import sessionmaker, declarative_base

from util import NV

Base = declarative_base()
@@ -23,6 +25,8 @@ class Origin(Base):
        return f'Origin(origin_ref={self.origin_ref}, hostname={self.hostname})'

    def serialize(self) -> dict:
        _ = NV().find(self.guest_driver_version)

        return {
            'origin_ref': self.origin_ref,
            # 'service_instance_xid': self.service_instance_xid,
@@ -30,6 +34,7 @@ class Origin(Base):
            'guest_driver_version': self.guest_driver_version,
            'os_platform': self.os_platform,
            'os_version': self.os_version,
            '$driver': _ if _ is not None else None,
        }

    @staticmethod
@@ -56,12 +61,22 @@ class Origin(Base):
        session.close()

    @staticmethod
    def delete(engine: Engine, origin_refs: [str] = None) -> int:
        session = sessionmaker(bind=engine)()
        if origin_refs is None:
            deletions = session.query(Origin).delete()
        else:
            deletions = session.query(Origin).filter(Origin.origin_ref.in_(origin_refs)).delete()
        session.commit()
        session.close()
        return deletions

    @staticmethod
    def delete_expired(engine: Engine) -> int:
        session = sessionmaker(bind=engine)()
        origins = session.query(Origin).join(Lease, Origin.origin_ref == Lease.origin_ref, isouter=True).filter(Lease.lease_ref.is_(None)).all()
        origin_refs = [origin.origin_ref for origin in origins]
        deletions = session.query(Origin).filter(Origin.origin_ref.in_(origin_refs)).delete()
        session.commit()
        session.close()
        return deletions
@@ -81,14 +96,18 @@ class Lease(Base):
    def __repr__(self):
        return f'Lease(origin_ref={self.origin_ref}, lease_ref={self.lease_ref}, expires={self.lease_expires})'

    def serialize(self, renewal_period: float, renewal_delta: timedelta) -> dict:
        lease_renewal = int(Lease.calculate_renewal(renewal_period, renewal_delta).total_seconds())
        lease_renewal = self.lease_updated + relativedelta(seconds=lease_renewal)

        return {
            'lease_ref': self.lease_ref,
            'origin_ref': self.origin_ref,
            # 'scope_ref': self.scope_ref,
            'lease_created': self.lease_created.replace(tzinfo=timezone.utc).isoformat(),
            'lease_expires': self.lease_expires.replace(tzinfo=timezone.utc).isoformat(),
            'lease_updated': self.lease_updated.replace(tzinfo=timezone.utc).isoformat(),
            'lease_renewal': lease_renewal.replace(tzinfo=timezone.utc).isoformat(),
        }

    @staticmethod
@@ -133,7 +152,7 @@ class Lease(Base):
        return entity

    @staticmethod
    def renew(engine: Engine, lease: "Lease", lease_expires: datetime, lease_updated: datetime):
        session = sessionmaker(bind=engine)()
        x = dict(lease_expires=lease_expires, lease_updated=lease_updated)
        session.execute(update(Lease).where(and_(Lease.origin_ref == lease.origin_ref, Lease.lease_ref == lease.lease_ref)).values(**x))
@@ -156,6 +175,36 @@ class Lease(Base):
        session.close()
        return deletions

    @staticmethod
    def delete_expired(engine: Engine) -> int:
        session = sessionmaker(bind=engine)()
        deletions = session.query(Lease).filter(Lease.lease_expires <= datetime.utcnow()).delete()
        session.commit()
        session.close()
        return deletions

    @staticmethod
    def calculate_renewal(renewal_period: float, delta: timedelta) -> timedelta:
        """
        import datetime
        LEASE_RENEWAL_PERIOD=0.2  # 20%
        delta = datetime.timedelta(days=1)
        renew = delta.total_seconds() * LEASE_RENEWAL_PERIOD
        renew = datetime.timedelta(seconds=renew)
        expires = delta - renew  # 19.2

        import datetime
        LEASE_RENEWAL_PERIOD=0.15  # 15%
        delta = datetime.timedelta(days=90)
        renew = delta.total_seconds() * LEASE_RENEWAL_PERIOD
        renew = datetime.timedelta(seconds=renew)
        expires = delta - renew  # 76 days, 12:00:00 hours
        """
        renew = delta.total_seconds() * renewal_period
        renew = timedelta(seconds=renew)
        return renew
def init(engine: Engine):
    tables = [Origin, Lease]
@@ -163,7 +212,7 @@ def init(engine: Engine):
    session = sessionmaker(bind=engine)()
    for table in tables:
        if not db.dialect.has_table(engine.connect(), table.__tablename__):
            session.execute(text(str(table.create_statement(engine))))
    session.commit()
    session.close()

app/util.py
@@ -1,10 +1,17 @@
import logging

logging.basicConfig()


def load_file(filename: str) -> bytes:
    log = logging.getLogger(f'{__name__}')
    log.debug(f'Loading contents of file "{filename}"')
    with open(filename, 'rb') as file:
        content = file.read()
    return content


def load_key(filename: str) -> "RsaKey":
    try:
        # Crypto | Cryptodome on Debian
        from Crypto.PublicKey import RSA
@@ -13,6 +20,8 @@ def load_key(filename) -> "RsaKey":
        from Cryptodome.PublicKey import RSA
        from Cryptodome.PublicKey.RSA import RsaKey

    log = logging.getLogger(__name__)
    log.debug(f'Importing RSA-Key from "{filename}"')
    return RSA.import_key(extern_key=load_file(filename), passphrase=None)
@@ -24,5 +33,50 @@ def generate_key() -> "RsaKey":
    except ModuleNotFoundError:
        from Cryptodome.PublicKey import RSA
        from Cryptodome.PublicKey.RSA import RsaKey

    log = logging.getLogger(__name__)
    log.debug(f'Generating RSA-Key')
    return RSA.generate(bits=2048)
class NV:
    __DRIVER_MATRIX_FILENAME = 'static/driver_matrix.json'
    __DRIVER_MATRIX: None | dict = None  # https://docs.nvidia.com/grid/ => "Driver Versions"

    def __init__(self):
        self.log = logging.getLogger(self.__class__.__name__)

        if NV.__DRIVER_MATRIX is None:
            from json import load as json_load
            try:
                file = open(NV.__DRIVER_MATRIX_FILENAME)
                NV.__DRIVER_MATRIX = json_load(file)
                file.close()
                self.log.debug(f'Successfully loaded "{NV.__DRIVER_MATRIX_FILENAME}".')
            except Exception as e:
                NV.__DRIVER_MATRIX = {}  # init empty dict to not try open file everytime, just when restarting app
                # self.log.warning(f'Failed to load "{NV.__DRIVER_MATRIX_FILENAME}": {e}')

    @staticmethod
    def find(version: str) -> dict | None:
        if NV.__DRIVER_MATRIX is None:
            return None
        for idx, (key, branch) in enumerate(NV.__DRIVER_MATRIX.items()):
            for release in branch.get('$releases'):
                linux_driver = release.get('Linux Driver')
                windows_driver = release.get('Windows Driver')
                if version == linux_driver or version == windows_driver:
                    tmp = branch.copy()
                    tmp.pop('$releases')

                    is_latest = release.get('vGPU Software') == branch.get('Latest Release in Branch')

                    return {
                        'software_branch': branch.get('vGPU Software Branch'),
                        'branch_version': release.get('vGPU Software'),
                        'driver_branch': branch.get('Driver Branch'),
                        'branch_status': branch.get('vGPU Branch Status'),
                        'release_date': release.get('Release Date'),
                        'eol': branch.get('EOL Date') if is_latest else None,
                        'is_latest': is_latest,
                    }
        return None

docker-compose.yml
@@ -1,9 +1,10 @@
version: '3.9'

x-dls-variables: &dls-variables
  TZ: Europe/Berlin # REQUIRED, set your timezone correctly on fastapi-dls AND YOUR CLIENTS !!!
  DLS_URL: localhost # REQUIRED, change to your ip or hostname
  DLS_PORT: 443
  LEASE_EXPIRE_DAYS: 90 # 90 days is maximum
  DATABASE: sqlite:////app/database/db.sqlite
  DEBUG: false
@@ -13,106 +14,16 @@ services:
    restart: always
    environment:
      <<: *dls-variables
    ports:
      - "443:443"
    volumes:
      - /opt/docker/fastapi-dls/cert:/app/cert
      - dls-db:/app/database
    logging:  # optional, for those who do not need logs
      driver: "json-file"
      options:
        max-file: 5
        max-size: 10m

volumes:
  dls-db:

examples/docker-compose-http-and-https.yml
@@ -0,0 +1,120 @@
version: '3.9'
x-dls-variables: &dls-variables
DLS_URL: localhost # REQUIRED, change to your ip or hostname
DLS_PORT: 443 # must match nginx listen & exposed port
LEASE_EXPIRE_DAYS: 90
DATABASE: sqlite:////app/database/db.sqlite
DEBUG: false
services:
dls:
image: collinwebdesigns/fastapi-dls:latest
restart: always
environment:
<<: *dls-variables
volumes:
- /etc/timezone:/etc/timezone:ro
- /opt/docker/fastapi-dls/cert:/app/cert # instance.private.pem, instance.public.pem
- db:/app/database
entrypoint: ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--app-dir", "/app", "--proxy-headers"]
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8000/-/health"]
interval: 10s
timeout: 5s
retries: 3
start_period: 30s
proxy:
image: nginx
ports:
# thees are ports where nginx (!) is listen to
- "80:80" # for "/leasing/v1/lessor/shutdown" used by windows guests, can't be changed!
- "443:443" # first part must match "DLS_PORT"
volumes:
- /etc/timezone:/etc/timezone:ro
- /opt/docker/fastapi-dls/cert:/opt/cert
healthcheck:
test: ["CMD", "curl", "--insecure", "--fail", "https://localhost/-/health"]
interval: 10s
timeout: 5s
retries: 3
start_period: 30s
command: |
bash -c "bash -s <<\"EOF\"
cat > /etc/nginx/nginx.conf <<\"EON\"
daemon off;
user root;
worker_processes auto;
events {
worker_connections 1024;
}
http {
gzip on;
gzip_disable "msie6";
include /etc/nginx/mime.types;
upstream dls-backend {
server dls:8000; # must match dls listen port
}
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
root /var/www/html;
index index.html;
server_name _;
ssl_certificate "/opt/cert/webserver.crt";
ssl_certificate_key "/opt/cert/webserver.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.3 TLSv1.2;
# ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305";
# ssl_ciphers PROFILE=SYSTEM;
ssl_prefer_server_ciphers on;
location / {
proxy_set_header Host $$http_host;
proxy_set_header X-Real-IP $$remote_addr;
proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $$scheme;
proxy_pass http://dls-backend$$request_uri;
}
location = /-/health {
access_log off;
add_header 'Content-Type' 'application/json';
return 200 '{\"status\":\"up\",\"service\":\"nginx\"}';
}
}
server {
listen 80;
listen [::]:80;
root /var/www/html;
index index.html;
server_name _;
location /leasing/v1/lessor/shutdown {
proxy_set_header Host $$http_host;
proxy_set_header X-Real-IP $$remote_addr;
proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $$scheme;
proxy_pass http://dls-backend/leasing/v1/lessor/shutdown;
}
location / {
return 301 https://$$host$$request_uri;
}
}
}
EON
nginx
EOF"
volumes:
db:

requirements.txt
@@ -1,8 +1,8 @@
fastapi==0.115.5
uvicorn[standard]==0.32.0
python-jose==3.3.0
pycryptodome==3.21.0
python-dateutil==2.8.2
sqlalchemy==2.0.36
markdown==3.7
python-dotenv==1.0.1

@@ -0,0 +1,137 @@
import logging
logging.basicConfig()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
URL = 'https://docs.nvidia.com/vgpu/index.html'
BRANCH_STATUS_KEY, SOFTWARE_BRANCH_KEY, = 'vGPU Branch Status', 'vGPU Software Branch'
VGPU_KEY, GRID_KEY, DRIVER_BRANCH_KEY = 'vGPU Software', 'vGPU Software', 'Driver Branch'
LINUX_VGPU_MANAGER_KEY, LINUX_DRIVER_KEY = 'Linux vGPU Manager', 'Linux Driver'
WINDOWS_VGPU_MANAGER_KEY, WINDOWS_DRIVER_KEY = 'Windows vGPU Manager', 'Windows Driver'
ALT_VGPU_MANAGER_KEY = 'vGPU Manager'
RELEASE_DATE_KEY, LATEST_KEY, EOL_KEY = 'Release Date', 'Latest Release in Branch', 'EOL Date'
JSON_RELEASES_KEY = '$releases'
def __driver_versions(html: 'BeautifulSoup'):
def __strip(_: str) -> str:
# removes content after linebreak (e.g. "Hello\n World" to "Hello")
_ = _.strip()
tmp = _.split('\n')
if len(tmp) > 0:
return tmp[0]
return _
# find wrapper for "DriverVersions" and find tables
data = html.find('div', {'id': 'driver-versions'})
items = data.findAll('bsp-accordion', {'class': 'Accordion-items-item'})
for item in items:
software_branch = item.find('div', {'class': 'Accordion-items-item-title'}).text.strip()
software_branch = software_branch.replace(' Releases', '')
matrix_key = software_branch.lower()
# driver version info from table-heads (ths) and table-rows (trs)
table = item.find('table')
ths, trs = table.find_all('th'), table.find_all('tr')
headers, releases = [header.text.strip() for header in ths], []
for trs in trs:
tds = trs.find_all('td')
if len(tds) == 0: # skip empty
continue
# create dict with table-heads as key and cell content as value
x = {headers[i]: __strip(cell.text) for i, cell in enumerate(tds)}
releases.append(x)
# add to matrix
MATRIX.update({matrix_key: {JSON_RELEASES_KEY: releases}})
def __release_branches(html: 'BeautifulSoup'):
# find wrapper for "AllReleaseBranches" and find table
data = html.find('div', {'id': 'all-release-branches'})
table = data.find('table')
# branch releases info from table-heads (ths) and table-rows (trs)
ths, trs = table.find_all('th'), table.find_all('tr')
headers = [header.text.strip() for header in ths]
for trs in trs:
tds = trs.find_all('td')
if len(tds) == 0: # skip empty
continue
# create dict with table-heads as key and cell content as value
x = {headers[i]: cell.text.strip() for i, cell in enumerate(tds)}
# get matrix_key
software_branch = x.get(SOFTWARE_BRANCH_KEY)
matrix_key = software_branch.lower()
# add to matrix
MATRIX.update({matrix_key: MATRIX.get(matrix_key) | x})
def __debug():
# print table head
s = f'{SOFTWARE_BRANCH_KEY:^21} | {BRANCH_STATUS_KEY:^21} | {VGPU_KEY:^13} | {LINUX_VGPU_MANAGER_KEY:^21} | {LINUX_DRIVER_KEY:^21} | {WINDOWS_VGPU_MANAGER_KEY:^21} | {WINDOWS_DRIVER_KEY:^21} | {RELEASE_DATE_KEY:>21} | {EOL_KEY:>21}'
print(s)
# iterate over dict & format some variables to not overload table
for idx, (key, branch) in enumerate(MATRIX.items()):
branch_status = branch.get(BRANCH_STATUS_KEY)
branch_status = branch_status.replace('Branch ', '')
branch_status = branch_status.replace('Long-Term Support', 'LTS')
branch_status = branch_status.replace('Production', 'Prod.')
software_branch = branch.get(SOFTWARE_BRANCH_KEY).replace('NVIDIA ', '')
for release in branch.get(JSON_RELEASES_KEY):
version = release.get(VGPU_KEY, release.get(GRID_KEY, ''))
linux_manager = release.get(LINUX_VGPU_MANAGER_KEY, release.get(ALT_VGPU_MANAGER_KEY, ''))
linux_driver = release.get(LINUX_DRIVER_KEY)
windows_manager = release.get(WINDOWS_VGPU_MANAGER_KEY, release.get(ALT_VGPU_MANAGER_KEY, ''))
windows_driver = release.get(WINDOWS_DRIVER_KEY)
release_date = release.get(RELEASE_DATE_KEY)
is_latest = release.get(VGPU_KEY) == branch.get(LATEST_KEY)
version = f'{version} *' if is_latest else version
eol = branch.get(EOL_KEY) if is_latest else ''
s = f'{software_branch:^21} | {branch_status:^21} | {version:<13} | {linux_manager:<21} | {linux_driver:<21} | {windows_manager:<21} | {windows_driver:<21} | {release_date:>21} | {eol:>21}'
print(s)
def __dump(filename: str):
import json
file = open(filename, 'w')
json.dump(MATRIX, file)
file.close()
if __name__ == '__main__':
MATRIX = {}
try:
import httpx
from bs4 import BeautifulSoup
except Exception as e:
logger.error(f'Failed to import module: {e}')
logger.info('Run "pip install beautifulsoup4 httpx"')
exit(1)
r = httpx.get(URL)
if r.status_code != 200:
logger.error(f'Error loading "{URL}" with status code {r.status_code}.')
exit(2)
# parse html
soup = BeautifulSoup(r.text, features='html.parser')
# build matrix
__driver_versions(soup)
__release_branches(soup)
# debug output
__debug()
# dump data to file
__dump('../app/static/driver_matrix.json')

@@ -1,7 +1,8 @@
import sys
from base64 import b64encode as b64enc
from calendar import timegm
from datetime import datetime
from hashlib import sha256
from os.path import dirname, join
from uuid import uuid4, UUID
@@ -9,7 +10,6 @@ from dateutil.relativedelta import relativedelta
from jose import jwt, jwk
from jose.constants import ALGORITHMS
from starlette.testclient import TestClient

# add relative path to use packages as they were in the app/ dir
sys.path.append('../')
@@ -18,6 +18,7 @@ sys.path.append('../app')
from app import main
from app.util import load_key

# main.app.add_middleware(PatchMalformedJsonMiddleware, enabled=True)
client = TestClient(main.app)

ORIGIN_REF, ALLOTMENT_REF, SECRET = str(uuid4()), '20000000-0000-0000-0000-000000000001', 'HelloWorld'
@@ -106,6 +107,15 @@ def test_auth_v1_origin():
    assert response.json().get('origin_ref') == ORIGIN_REF
def test_auth_v1_origin_malformed_json():  # see oscar.krause/fastapi-dls#1
    from middleware import PatchMalformedJsonMiddleware

    # test regex (temporary, until this section is merged into main.py)
    s = '{"environment": {"fingerprint": {"mac_address_list": [ff:ff:ff:ff:ff:ff"]}}'
    replaced = PatchMalformedJsonMiddleware.fix_json(s)
    assert replaced == '{"environment": {"fingerprint": {"mac_address_list": ["ff:ff:ff:ff:ff:ff"]}}'
def auth_v1_origin_update():
    payload = {
        "registration_pending": False,

version.env
@@ -1 +0,0 @@
VERSION=1.3.2