Cool project! Setting up a quick local HTML server can be annoying.
Alas, it looks like it's web/Electron based. :/ Downloading it and, yep, 443.8 MB on macOS. The Linux one is a bit better at 183.3 MB.
Electron really should get a kickback from disk manufacturers! ;)
Shameless plug: I've been working on an HTML-inspired lightweight UI toolkit because I'm convinced we can make these sorts of apps and they should only be ~10-20 MB [1], with nice syntax, animation, theming, etc. I'm finally making a suite of widgets. Maybe I can make a basic clone of this! Bet I could get it in < 10 MB. :)
1: https://github.com/elcritch/figuro
My usual go-to for a quick static server is:
python -m http.server
But variations exist for a lot of languages. PHP has one built in too.
I use python for serving my local static site development with this custom little bash wrapper script I wrote:

#!/usr/bin/env bash
# http-server [PORT] [DIRECTORY] -- serve DIRECTORY over HTTP on PORT
set -e; [[ $TRACE ]] && set -x

port=8080
dir="."

if [[ "$1" == "-h" || "$1" == "--help" ]]; then
  echo "usage: http-server [PORT] [DIRECTORY]"
  echo "  PORT       Port to listen on (default: 8080)"
  echo "  DIRECTORY  Directory to serve (default: .)"
  exit 0
fi

if [[ -n "$1" ]]; then
  port=$1
fi
if [[ -n "$2" ]]; then
  dir=$2
fi

# note: --protocol (HTTP/1.1 keep-alive) needs Python 3.11+
python3 -m http.server --directory "$dir" --protocol HTTP/1.1 "$port"
From the people who brought you Useless Use of Cat, here's our newest innovation: Useless Use of Bash!
That whole script could just be the last line! Maybe you could add defaults like
Fair, but don't need to be snooty about it. :-)
Genuinely curious about what the full script would look like in consideration of your feedback.
Almost! That will read the variable whose name is the script argument. Also, the directory argument needs a flag on my setup. It should be:
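(The snippet itself is missing here; presumably it was the script's final line with bash default expansions supplying the fallbacks and the directory passed via the explicit flag — a reconstruction, not the commenter's actual code:)

python3 -m http.server --directory "${2:-.}" --protocol HTTP/1.1 "${1:-8080}"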
Yep, you're right.
Don't forget bash.
For anyone baffled by this: This works because HTTP/0.9 (just called "HTTP" at the time) worked in an extremely simple way, and browsers mostly retained compatibility for this.
HTTP/0.9 web browser sends:

GET /

Netcat sends:

<!doctype html>
...

Nowadays a browser will send `GET / HTTP/1.1` and then a bunch of headers, which a true HTTP/0.9 server may be able to filter out and ignore, but of course this script will just send the document and the browser will still assume it's a legacy server.

I was about to down-vote you, but that would be unfair, as this has roughly the typical level of correctness of most bash scripts.
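(The netcat one-liner being discussed isn't quoted above; a minimal sketch of the idea, assuming a netcat that accepts a bare port after -l — traditional netcat wants `nc -l -p 8080` instead:)

# answer every connection with the raw file: no status line, no headers, HTTP/0.9-style
while true; do nc -l 8080 < index.html; done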
It's absolutely (almost) correct! HTTP/0.9 does not require you to send back a status code or any headers. Some modern web servers even recognise a lone "GET /" to mean HTTP/0.9 and will respond accordingly.
This is exactly my point - it successfully accomplishes a very specific task, in a way that is fragile and context dependent, and completely fails to handle any errors or edge cases, or reckon with any complexity at all.
This is hilarious
There are some nice compilations of those, like
https://gist.github.com/willurd/5720255
Python is my go-to method too, although the config file approach from this project looks exciting.
(I'm sure if I dug in the http.server documentation I could find all those options too.)
Random tangent: It appears that most of Electron's funding is actually the German government.
The Sovereign Tech Agency, under the Federal Ministry for Economic Affairs and Climate Action, funds OpenJS, specifically for improving the state of open source in JavaScript.
Electron is now part of OpenJS.
Is wasting cycles considered "climate action" now?
I wonder if I can get an LLM to act as a webserver, and connect it to a TCP port...
Yes, just say "format your responses as a web server" and then connect it to netcat.
I thought everyone tried this? It speaks HTML, and tossing in the few things the spec requires is peanuts for an LLM.
It's an action alright. The direction, however...
> Alas it looks like it's web/electron based.
For me this contradicts the claim of being simple. As opposed to this:
I use miniserve (< 5 MB) just because I can never remember the python incantation. https://github.com/svenstaro/miniserve
I like and understand the paradigm of using web technologies to build GUI apps. I have yet to find any desktop framework that even comes close to the DX of using web tech.
I recently explored both Tauri [1] and Wails [2]. Especially Wails is lots of fun. The simplicity of go paired with the fast iteration I get from using React just feels awesome. And the final application is ~10 MB in size!
[1] https://v2.tauri.app/ [2] https://wails.io/
OTOH, I have yet to see any web framework that comes even close to the UX of native tech.
It's almost as if web crap is optimizing for developer experience at the expense of users.
In SW development you need to make compromises. You cannot have all of: quality, performance, memory/CPU/disk efficiency, security, development speed, low effort, cross-platform support, accessibility, all the business features, etc. Which corners would you cut? You mention native tech, but you seem to ignore the enormous tax in development time, knowledge, etc. So let's say you aim for the best UX. Are you ready to sacrifice business features, or any other aspect? I'm not advocating for crappy UI/UX, but I would rather use an Electron app that has all the features I need than a native app that doesn't.
I have thought about this and I'm not sure that Electron really is to blame here. It makes building an application accessible, which means that there will be lots of apps built with it, many of which won't be any good.
Just like many native apps will also be horrible in terms of UX. Good apps are good. And I believe that it's entirely possible to build an amazing app with Electron.
Although not everyone might agree, IMO VSCode is a great example of that.
I fully agree with this. Electron is hated here as if it were the source of all evil. When Electron came along (and node-webkit as well), there were very limited options to create fully cross-platform apps easily. I tried multiple ways (including Qt) but it was very cumbersome and slow to progress. With Electron, not only was I able to create a useful app quickly, I could reuse almost all the code on the web. OK, it takes space and consumes memory inefficiently, but thanks to Electron a lot of useful apps exist that otherwise wouldn't, or would be much worse. Today Tauri or something else might be a better choice, but hating Electron seems really out of place.
I dunk on electron, but it’s a love-hate sort of thing. There’s some great apps out there due to electron. VSCode is great. This http server is well done and looks handy!
Personally though I’m just greedy. I want the best of QT and Electron. Figuro is my attempt at realizing that. ;)
That's nice. Does it use the native components, or are you rendering that all lower level?
You say that, and I hear the arguments, but numbers don't lie.
https://x.com/daniel_nguyenx/status/1734495508746702936
Further discussion can be found here: https://www.macstories.net/linked/is-electron-really-that-ba... and in the linked video.
You say at the expense of users. But when even Apple doesn't go all native, it's telling.
Interesting links, though it seems more due to SwiftUI than anything. SwiftUI still seems rough compared to good ole Cocoa. I also remember when Electron apps ate 100% CPU due to blinking text cursors.
For what it's worth, my Figuro library does pretty well for live-updating text and scrolling! And I haven't even optimized layouts yet; it currently re-layouts the entire tree for every frame.
Could you list out your native GUI stack, for Windows, OSX and Linux?
Linux: still can't decide between Qt and Gtk.
macOS: too busy rewriting it in SwiftUI before Apple pulls the plug on Obj-C.
Windows: https://old.reddit.com/r/Windows10/comments/o1x183/
Did you try Flutter? That one worked for me at least as well as using the web approach. Definitely from a DX side.
I have tried Flutter and liked it for mobile development. Maybe I should give it a shot for desktop. Though I believe those that dislike Electron and the likes for not being native would also have a bone to pick with Flutter.
With flutter you give up all the standard web components, accessibility defaults etc. If you don't mind, then it's definitely an option.
"Bet I could get in < 10MB."
I use one that is 99K static binary.
Even the full-featured TLS/HTTPS forward proxy I use, linked with bloated OpenSSL, is still less than 10MB static binary: 8.7MB. When linked to WolfSSL it's only 4.6MB static binary. The proxy can serve small, static HTML pages, preloading them into memory at startup.
For a GUI app though?
WinRAR is 3.7 MB.
Funny the bulk of the server is vestigial client code.
Figures, though I suspect that code only makes up a fraction of the binary size. Assuming it’s electron most of that bulk is chromium bits.
lighttpd is awesome for a quick, local server on Ubuntu. One command installs it. You tell the firewall to allow it. Then, just move your files into the directory. Use a CDN, like BunnyCDN, for HTTPS and caching.
It's not only easy: it runs (or ran) huge sites in production.
Figuro is like Sciter but for Nim?
Sorta! It didn't start out that way, but I've been building more from HTML over time while keeping it fast and lightweight. I've cherry-picked a subset of HTML/CSS, like CSS Grid, which adds a lot of layout power without tons of the usual HTML hacks.
I want to try adding a JavaScript runtime with a simple DOM built on Figuro nodes instead. But there are some issues with Nim's memory management and QuickJS.
Because projects like these were missing back then, I got creative with nginx and do not need any config changes to serve new projects:
Configuration is done via the domain name, like projectx-20205-127-0-0-1.nip.io, which specifies the directory and port. All you need to do is create a junction (mklink /J domain folder_path). This maps the domain to a folder.
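(The config itself isn't quoted above; judging by the description and the directives the replies mention — root, try_files, proxy_pass, listen 80 — it was presumably a server block of roughly this shape, with paths and capture names hypothetical:)

server {
    listen 80;
    # pull the port out of the nip.io hostname, e.g. projectx-20205-127-0-0-1.nip.io
    server_name ~^[\w-]+-(?<port>\d+)-127-0-0-1\.nip\.io$;
    # the mklink junction points a folder named after the host at the project directory
    root C:/sites/$host;
    location / {
        # serve the static file if it exists, otherwise hand off to the app port
        try_files $uri $uri/ @app;
    }
    location @app {
        proxy_pass http://127.0.0.1:$port;
    }
}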
Am I reading this wrong or does this almost open up any server bound to localhost to the outside?
I think proxy_pass will forward traffic even when the root and try_files directives fail because the junction/symlink doesn't exist? And "listen 80" binds on all interfaces, doesn't it, not just on localhost?
Is this clever? Sure. But this is also the thing you forget about in 6 months and then when you install any app that has a localhost web management interface (like syncthing) you've accidentally exposed your entire computer including your ssh keys to the internet.
Nothing is preventing you from adding an IP whitelist and/or basic auth to the same configuration. That is what I do with all my nginx configurations to be extra careful, so nothing slips by accident.
Will just any request even pass the host matching?
I got something similar running with nginx myself, with the purpose of getting access to my internal services from outside. The main idea here is that the internal services are not on the same machine this nginx is running on, so it will pass requests along to the needed server on the internal network. It goes like this:
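(The config block is missing here; a sketch of the kind of server block described — the outer domain, resolver address, and TLS details are hypothetical:)

server {
    listen 443 ssl;  # ssl_certificate directives omitted for brevity
    # capture the service name from the subdomain
    server_name ~^(?<service>[\w-]+)\.example\.com$;
    # internal DNS server that knows how to resolve *.internal
    resolver 10.0.0.53;
    location / {
        # a variable in proxy_pass forces a runtime DNS lookup via the resolver
        proxy_pass http://$service.internal;
    }
}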
Basically any regex-matched subdomain is extracted, resolved as $service.internal, and proxy-passed to it. For this to work, of course, any new service has to be registered in internal DNS. Adding whitelisted IPs and basic auth is also a good idea (which I have, just removed from the example).

That's why I switched to Caddy for most of my needs. I create one Caddy server template, and then instantiate it as a new host with one line per server.
This looks nice with a friendly UI. I've been very happy with Caddy[1], but this seems like something I might recommend to someone that is new to the web environment.
[1] https://caddyserver.com/docs/quick-starts/static-files
Or someone who has ChromeOS not in dev mode.
I've been using SimpleWebServer there for six years or so, and it just works.
Caddy works just fine in ChromeOS' OOTB Linux VM. Crostini has chipped away at most dev mode use-cases.
Electron? It's unfortunate how bad programming has become. Over 100MB for a program that could be written with native code under 1MB.
I think the popularity of Electron is merely a testament to the fact that native UI libraries need to step up their game in terms of approachability.
> native UI libraries need to step up their game in terms of approachability.
Gnome does this: you can develop apps in TypeScript.
But they started to migrate some of their own apps to TypeScript and immediately received backlash from the community [0]. Although, granted, the Phoronix forums can be quite toxic.
My observation is that there is just a big disconnect between younger devs who just want to get the job done, and the more old-school community who care about resource efficiency. Both are good intentions that I can understand, but they clash badly. This unfortunately hinders progress on this point.
[0] https://www.phoronix.com/forums/forum/phoronix/latest-phoron...
Cool, I didn't know that.
I agree that this is, at least often, a case of where your roots lie. What's most shocking to me is that the likes of Apple and Microsoft don't seem to be interested in, or capable of, building an actually good framework.
I feel like Microsoft tried with .NET Maui, but that really isn't a viable choice if you go by developer sentiment.
TypeScript is a Microsoft project, so they did build an actually good framework. The Swift work that Apple's doing is pretty cool, though I haven't used it in anger.
I come from an async/lock-free C++ then Rust background, but am using TypeScript quite a bit these days. Rust is data-race free because of the borrow checker. Swift async actors are too, by construction (similar to other actor-based frameworks like Orleans). TypeScript is trivially data-race free (only one thread). Very few other popular languages provide that level of safety these days. Golang certainly does not.
I was benchmarking some single-threaded WASM Rust code, and couldn't figure out why half the runs were 2x slower than the other half. It turns out I was accidentally running native half the time, so those runs were faster. I'm shocked the single-core performance difference is so low.
Anyway, as bad as JavaScript used to be, TypeScript is actually a nice language with a decent ecosystem. Like Rust and C++, its type system is a joy to work with, and greatly improves productivity vs. languages like Java, C#, etc.
It is more a side effect of JavaScript bootcamp programming without learning anything else.
I have been coding since 1986, nowadays most of the UIs I get paid to work on are Web based, yet when I want to have fun in side projects I only code for native UIs, if a GUI is needed.
Want to code like VB and Delphi? Plenty of options available, and yes they do scalable layouts, just like they used to do already back in the 1990's for anyone that actually bothered to read the programming manuals.
Yes, I've dabbled in gtk, wxWidgets and several other systems. All of them are meh.
The big player these days seems to be web-based (Electron and friends), though the JVM stack with a native theme for Win/Mac is certainly usable in an environment where you can rely on Java being around.
I think the best option would be some kind of cross-application client-side HTML etc. renderer that apps could use for their user interaction. We could call it a "browser". That avoids the problem of 10 copies of the whole electron stack for 10 apps.
Years ago, Microsoft had their own version of this called HTA (HTML Application) where you could delegate UI to the built-in browser (IE) and get native-looking controls. Something like that but cross-platform would be nice, especially as one motivation for this project is that Chrome apps are no longer supported so "Web Server for Chrome" is going away. So the "like Electron but most of the overhead is handled by Chrome" option is actively being discontinued.
> I think the best option would be some kind of cross-application client-side HTML etc...
I think Tauri is trying to go for this - a web app without the whole chromium bundled, but using a native web view
Funny to ship a web browser for a webserver.
I suppose that's why people run multi socket machines for "home labs".
The real problem is that frontend work with anything else is such a pain in the ass.
You want to write separate versions for macOS, Linux, and Windows Visual .NET#++, and maintain 3 separate source trees in 3 languages, and sync all their features, and deal with every bug 3 times?
https://gist.github.com/willurd/5720255
a more comprehensive list of one liner webservers is here: https://github.com/imgarylai/awesome-webservers
I was going to mention busybox httpd and php -S, but this list has them already :).
Should support self-signed HTTPS, ideally. IIRC there are quite a few (some?) web features that do not function unless the page is served over HTTPS.
That would certainly make this more useful than `python3 -m http.server`.
It does. It also includes a dozen other things beyond what that one liner would do. Keep in mind, if it fits with what you're trying to test/how you're trying to develop, just doing things on http://localhost will be treated as a secure origin in most browsers these days.
There does seem to be a weird limitation that you can't enable both HTTP and HTTPS on the same port for some reason. That should be easy enough to code a fix for though.
> HTTP and HTTPS on the same port
Do any real web servers support this?
It's the same transport (TCP, assuming something like HTTP 1.1), and trying to mix HTTP and HTTPS seems like a difficult thing to do correctly and securely.
NGINX detects attempts to use http for server blocks configured to handle https traffic and returns an unencrypted http error: "400 The plain HTTP request was sent to HTTPS port".
Doing anything other than disconnecting or returning an error seems like a bad idea though.
Theoretically it would be feasible with something like STARTTLS, which allows upgrading a connection (part of SMTP, and maybe IMAP), but browsers do not support this as it is not part of standard HTTP.
It actually is part of standard HTTP [0], just not part of commonly implemented HTTP.
The basic difference between SMTP and HTTP in this context is that email addresses do not contain enough information for the client to know whether it should be expecting encrypted transport or not (hence MTA-STS and SMTP/DANE [1]), so you need to negotiate it with STARTTLS or the like, whereas the https URL scheme tells the client to expect TLS, so there is no need to negotiate, you can just start in with the TLS ClientHello.
In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity. We use this trick to multiplex DTLS/SRTP/STUN and it's somewhat tricky to get right [2] and places limitations on what code points you can assign later. If you wanted to port multiplex, it would be better to do something like HTTP Upgrade, but at this point port 443 is so entrenched, that it's hard to see people changing.
[0] https://www.rfc-editor.org/rfc/rfc7230#section-6.7
[1] https://datatracker.ietf.org/doc/html/rfc8461 and https://datatracker.ietf.org/doc/html/rfc7672
[2] https://datatracker.ietf.org/doc/html/rfc7983
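(For the curious, the in-band upgrade that [0] enables looks roughly like this — RFC 2817's TLS-within-HTTP/1.1, which browsers never implemented:)

GET / HTTP/1.1
Host: example.com
Upgrade: TLS/1.0
Connection: Upgrade

HTTP/1.1 101 Switching Protocols
Upgrade: TLS/1.0, HTTP/1.1
Connection: Upgrade

...after which the TLS handshake proceeds on the same connection.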
> In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity.
Exactly my original point. If you really understand the protocols, there is probably zero ambiguity (I'm assuming here). But with essentially nothing to gain from supporting this, it's obvious to me that any minor risk outweighs the (lack of) benefits.
You can in fact run HTTP, HTTPS (and SSH and many others) on the same port with SSLH (it's in the Debian repos). SSLH forwards incoming connections based on the protocol detected in the initial packets. Probes for HTTP, TLS/SSL (including SNI and ALPN), SSH, OpenVPN, tinc, XMPP, and SOCKS5 are implemented, and any other protocol that can be tested using a regular expression can be recognised.
https://github.com/yrutschle/sslh
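(A sketch of a typical invocation — addresses hypothetical; recent versions spell the TLS probe --tls, older ones --ssl:)

# listen on 443 and demultiplex by protocol probe
sslh --foreground --listen 0.0.0.0:443 \
     --tls 127.0.0.1:8443 --ssh 127.0.0.1:22 --http 127.0.0.1:8080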
Cool repo, but I stand by this is a major security risk with very little if any benefit.
how is it more of a security issue than exposing the same services on other ports? Seems to me it’s actually better dont-call-it-security-through-obscurity?
I think what I had seen before was replacing the http variant of the "bad request" page with a redirect to the HTTPS base URL, something akin to https://serverfault.com/a/1063031. Looking at it now, this is probably more "hacky" than it'd be worth and, as you note, probably comes with some security risks (though for a local development app like this maybe that's acceptable, just as using plain HTTP otherwise is), so it does make sense that's not an included feature after all.
I don't think that really applies here.
In general the way that works is user navigates to http://contoso.com which implicitly uses port 80. Contoso server/cdn listening on port 80 redirects them through whatever means to https://contoso.com which implicitly uses 443.
I don't see value in both being on the same port. Why would I ever want to support this when the http: or https: essentially defines the default port?
Now of course someone could go to http://contoso.com:443, but WHY would they do this? Again, failing to see a reason for this.
I (and the provided link) are not referring to http://example.com:80 to https://example.com:443 type redirects, though those are certainly nice too. They are, indeed, solely about http://example.com:443 to https://example.com:443 type redirects and what those can provide.
The "why/value" is usually in clearly handling accidents in hardcoding connection info, particularly for local API/webdev environments where you might pass connection information as an object/list of parameters rather than normal user focused browser URL bar entry. The upside is a connection error can be a bit more opaque than an explicit 400 or 302 saying what happened or where to go instead. That's the entire reason webservers tend to respond with an HTTP 400 in such scenarios in the first place.
Like I said though, once I remembered this was more a "hacky" type solution to give an error than built-in protocol upgrade functionality I'm not so sure the small amount of juice would actually be worth the relatively complicated squeeze for such a tool anymore.
This particular shortcoming is why I wrote https://github.com/rhardih/serve back when.
These days just using `caddy` might be easier though.
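For the static-file case that's a single command (flags per Caddy's file-server subcommand; --browse adds a directory listing):

caddy file-server --root . --listen :8080 --browse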
For those who have Rust, I like miniserve[1]:
[1]: https://github.com/svenstaro/miniserve

Thanks for posting miniserve <3
I use voidtools Everything on Windows for instant file lookup. It has an HTTP server built in. Whenever a browser complains about a feature only being available via a webserver URL, not a local file, it comes in handy: open the Everything webserver, enter the name of the file, and click.
I've been using Everything forever and never knew about this feature. Thanks!
Tailscale does this, you can serve a port on your Tailnet, or you can serve a directory of files, or you can expose it to the internet. Comes with HTTPS. It's pretty neat.
https://tailscale.com/kb/1312/serve
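(Roughly, with the newer CLI syntax — the port number is hypothetical:)

tailscale serve 3000    # proxy your tailnet HTTPS hostname to localhost:3000
tailscale funnel 3000   # same, but reachable from the public internet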
It looks nice and friendly, but for developers I can recommend exploring caddy[1] or nginx[2]. It's a useful technology to have worked with, even if they're ultimately only used for proxying analytics.
[1] https://caddyserver.com/ [2] https://nginx.org/
Second that. I don't really see a reason not to run a proper web server, actually, especially if one does web development and would use it for multiple projects anyway.
You get a whole copy of Chromium doing something a simple python -m http.server would do without the 200 MB overhead.
A default Python install is > 200 MB though.
For a second there, I read it as static web server [1], which is actually pretty cool itself
1: https://static-web-server.net/
I often have the requirement, during development cycles, to bring up a static webserver. After trying several options I always happily come back to the PHP built-in webserver:
php -S localhost:8080
Many helpful options are available, just check the docs...
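For example, -t picks the document root, and an optional router script (the name here is hypothetical) gets to handle every request:

php -S 0.0.0.0:8080 -t public/ router.php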
Bun does this with:
bun index.html
https://x.com/jarredsumner/status/1886073859138580506
npx http-server (keep the "r" at the end, it's more up-to-date than the http-serve package)
https://github.com/http-party/http-server
Seems like most of the people in these comments have missed the point, as while I also lament the use of Electron, pasting one-liner scripts does not obviate the usefulness of this project. Clearly the point of this project is not just about setting up a simple webserver, but to provide a quick and easy GUI to configure that webserver, and there's a fair amount that it allows you to configure. Your one-liner that does nothing but pipe static files as a response is not that. If that's all you need, great, then this project is not for you and that's okay.
Missed opportunity to actually serve the UI via the webserver in the first place, as we used to do 25 years ago, like with the IIS web management UI in my default browser.
for a webserver with an awesome GUI served through the web i recommend roxen. it is not a web framework requiring you to write code to serve content, but it serves static files out of the box and lets you add dynamic content on top of that. and it can outperform apache and other webservers depending on your workload.
Written in Pike, interesting.
to anyone concerned about needing to learn a new language, unless you want to build very custom dynamic sites you don't. to conveniently serve static files you won't ever have to touch the code. just like you wouldn't do that with apache or nginx.
and even if you do want something dynamic, a lot of dynamic features can be had by embedding custom tags in html. still no code required.
if you have concerns about pike as a language for the server implementation itself, pike is a very performant language with a long history of being used in high-profile sites and services. both pike and roxen go back to the early 90s.
and if you do want to create custom features and need help you can hire me. i am looking for work (pike,js,ts,python,php,ruby,go,... ;-)
Lol. I used to work at an IIS shop. 100% of engineering thought using it was a bad idea. Any one of us could configure apache correctly in a few minutes, but we ultimately had to hire a full time Microsoft guru to keep IIS (and the rest of the ecosystem that implies) running. He was a pleasure to work with, but it wasn’t clear why his job existed.
Anyway, Web UI != easy to administer.
Because shipping the browser alongside the server as a means to provide a Web UI is a much better option. /s
240 MB for a simple web server? You're doing something wrong. And by something, I mean everything.
My go-to is https://libraries.io/pypi/beautify-http-server; upload files and everything. I have it running on my Raspberry Pi.
What is libraries.io? Why not link to the official pypi page?
https://pypi.org/project/beautify-http-server/
Sorry I rushed the URL. I was focused on presenting the name of the package. I have no idea what libraries.io is.
You might want to remove the semi-colon
Too late to edit. Here's a better clickable: https://pypi.org/project/beautify-http-server/
... open to the internet, adding an allow rule to the host firewall (at least on linux)
Not sure why I need this on Mac. Apache is already installed — just drop your files in ~/Sites/ or /Library/WebServer/Documents/.
Wow, this is a "full circle" moment. I can distinctly remember installing my first WAMP (Windows, Apache, MySQL, PHP) stack back in ~03 when I was learning to program. It was all easy point-and-click installers. I think I may have had to edit a config file to enable PHP, but that was it.
Looks cool! If anyone wants a similar thing without a UI, there's also webfsd [0]. It's mature, feature-packed and fast.
[0]: https://github.com/ourway/webfsd
If you globally install the npm module dhost, you can run it from any directory (`dhost`) to start a webserver there.
Python 2 had a similar function (`python -m SimpleHTTPServer`). I know there's a Python 3 equivalent, but I don't have it memorized.
I use serve
https://www.npmjs.com/package/serve
python3 -m http.server
I wrote the original version of this "simple web server" app (https://chromewebstore.google.com/detail/web-server-for-chro...) because the built-in python http server is a bit buggy. It would hang on some connections when loading a webpage with a lot of small assets. I was surprised how many people found it useful. More so when Chrome web apps were supported on other platforms (mac/linux/windows).
I also use this! Like, I once wanted a backup of my device onto my phone when my system was messed up, and I used this. Though I would prefer a way to resume downloading from where it left off, because this method isn't reliable for a 40-gig transfer.
I am also wondering about this comment in the gist that was linked (gist.github.com/willurd/5720255) by olejorgenb, which is:
Limiting request to a certain interface (eg.: do not allow request from the network)
python -m http.server --bind 127.0.0.1
Like, what does that really do? Maybe it's also nice, IDK?
127.0.0.1 means "self" in IP. Presumably that means that if you browse to your IP address from your computer it will work, but from your phone it will not.
I usually do the opposite - 0.0.0.0 - which allows connections from any device.
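A quick illustration of the difference (the port is arbitrary):

python3 -m http.server --bind 127.0.0.1 8000   # loopback only: other devices can't connect
python3 -m http.server --bind 0.0.0.0 8000     # all interfaces: your phone on the LAN can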
cd /some/dir && python -m http.server 8080
python3 -m http.server -d /path/to/dir
Don’t forget 8080. http.server binds on port 8000 by default :].
Why is 8080 more likely to be available than 8000?
Don’t worry, it’s a misunderstanding.
Most people will have Python installed already, but not mini_httpd.
docker pull nginx
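And to actually serve the current directory with it (the stock image's docroot is /usr/share/nginx/html):

docker run --rm -p 8080:80 -v "$PWD":/usr/share/nginx/html:ro nginx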
I basically do jwebserver.exe, done.
python3 -m http.server 8000