since many users complain about issues with modern, ultracomplex
clusterfuck software such as chromium, nodejs, etc., i've reconsidered
one of my original ideas for how to implement remote dns lookup support.
instead of having a background thread serving requests via a pipe,
the user manually starts a background daemon process before running
proxychains, and the two processes then communicate via UDP.
this requires far fewer hacks (like hooking close() to prevent pipes
from getting closed) and doesn't need to call any async-signal-unsafe
code like malloc(). this means it should be much more compatible than
the previous method, although it's less practical and slightly slower.
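to illustrate the idea (only a rough sketch: the request/response layout
and the port number below are made up for the example, not the actual
wire format spoken by proxychains4-daemon), a lookup from the library
side boils down to a single sendto()/recvfrom() pair on a UDP socket;
there is no long-lived pipe that has to be protected from the
application's close() calls, and no malloc() involved:

    /* sketch of a UDP round-trip to a local resolver daemon;
       struct layout and port are hypothetical. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    struct dns_req  { char host[256]; };
    struct dns_resp { uint32_t addr; };   /* IPv4, network byte order */

    static int remote_lookup(const char *host, struct in_addr *out) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd == -1) return -1;
        struct sockaddr_in daemon = {
            .sin_family = AF_INET,
            .sin_port = htons(1053),              /* assumed daemon port */
            .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
        };
        struct dns_req req = {0};
        size_t n = strlen(host);
        if (n >= sizeof req.host) { close(fd); return -1; }
        memcpy(req.host, host, n + 1);
        struct dns_resp resp;
        /* note: no timeout handling, matching the current state of the code */
        if (sendto(fd, &req, sizeof req, 0,
                   (struct sockaddr *)&daemon, sizeof daemon) == -1 ||
            recvfrom(fd, &resp, sizeof resp, 0, 0, 0) != sizeof resp) {
            close(fd);
            return -1;
        }
        out->s_addr = resp.addr;
        close(fd);
        return 0;
    }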
it's recommended that the proxychains4-daemon runs on localhost, and
if you use proxychains-ng a lot you might want to set it up as a service
that starts on boot. a single proxychains4-daemon should theoretically
be able to serve many parallel proxychains4 instances, but this has not
been tested yet. it's also possible to run the daemon on other computers,
even over the internet, but currently there is no error-checking or
timeout code at all; that means the UDP connection needs to be very
stable.
the library code used for the daemon sources is taken from my projects
libulz[0] and htab[1], and the server code is loosely based on
microsocks[2]. their licenses are all compatible with the GPL.
if not otherwise mentioned, they're released for this purpose under
the standard proxychains-ng license (see COPYING).
[0]: https://github.com/rofl0r/libulz
[1]: https://github.com/rofl0r/htab
[2]: https://github.com/rofl0r/microsocks
this should make it work automatically on any new platform without
having to add yet another ifdef hack.
should fix issues with android (#265).
additionally, the code path for systems lacking a GNU-compatible getservbyname_r()
is now guarded with a mutex, which prevents possible races and thereby
resolves the ancient "TODO" item.
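a simplified sketch of that fallback (not the exact code in the tree;
function and lock names are made up): calls to the non-reentrant
getservbyname() are serialized with a mutex, and the needed value is
copied out while the lock is held:

    #include <netdb.h>
    #include <pthread.h>

    static pthread_mutex_t servbyname_lock = PTHREAD_MUTEX_INITIALIZER;

    /* returns the service's port in network byte order, or 0 if unknown */
    static unsigned short port_from_servname(const char *name, const char *proto) {
        unsigned short port = 0;
        pthread_mutex_lock(&servbyname_lock);
        struct servent *se = getservbyname(name, proto);
        if (se) port = (unsigned short) se->s_port;
        pthread_mutex_unlock(&servbyname_lock);
        return port;
    }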
calling uname in a configure script is entirely bogus, as it will return
wrong results in cross-compilation scenarios. the only sensible way to
detect the target platform's peculiarities is to test the preprocessor
for macros defining the target.
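for illustration, this is the kind of check that replaces the uname call
(the TARGET_* names here are just placeholders, not the macros the
configure script actually emits):

    /* let the target compiler's predefined macros identify the platform,
       so cross-compilation picks up the target, not the build host.
       __ANDROID__ must be checked before __linux__, since it defines both. */
    #if defined(__APPLE__)
    #  define TARGET_MAC 1
    #elif defined(__FreeBSD__)
    #  define TARGET_FREEBSD 1
    #elif defined(__OpenBSD__)
    #  define TARGET_OPENBSD 1
    #elif defined(__ANDROID__)
    #  define TARGET_ANDROID 1
    #elif defined(__linux__)
    #  define TARGET_LINUX 1
    #endif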
it turns out that those macros are not portable at all. rather than
adding workarounds to make it work for every single platform, just
use plain s6_addr instead.
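for reference, s6_addr is the only member of struct in6_addr that POSIX
guarantees (a plain array of 16 bytes); members like s6_addr32 are
non-standard extensions hidden behind different unions and macros
depending on the libc. byte-wise access through s6_addr is sufficient
for copying or comparing addresses, e.g.:

    #include <netinet/in.h>
    #include <string.h>

    /* compare two IPv6 addresses using only the portable s6_addr array */
    static int ipv6_equal(const struct in6_addr *a, const struct in6_addr *b) {
        return !memcmp(a->s6_addr, b->s6_addr, 16);
    }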
contrary to what one would expect, setting `CC?=gcc -m32` in config.mak did not
actually lead to `gcc -m32` being used as the compiler when running make, even
though CC was not set anywhere else.
it turns out the CC variable is implicitly pre-defined by GNU make (its built-in
default is `cc`), so the ?= assignment (meaning "assign only if not already
assigned") had no effect.
when this configure script and Makefile here were written, they were modeled
after the interface provided by GNU autoconf (so there are no surprises for the
user). the assumption was that environment variables passed during configure
are usually stored and used for the compile, but can be overridden when running
make by exporting the variables again.
in reality, though, tests showed that they cannot be overridden via the
environment when running make.
because of that, the other user-supplied variables will now be hard-assigned as
well.
closes #152
commit message by @rofl0r.
we temporarily store all buildsystem-set conditionals in OUR_CPPFLAGS and
write it into config.mak as an addition to any user-supplied CPPFLAGS.
this should prevent crucial things we set from being overwritten by a
user who has CPPFLAGS exported.
fixes #142
apparently mktemp on OSX 10.9.5 requires a parameter.
instead of playing whack-a-mole with apple we now use the portable
code from musl's configure script which should work everywhere.
addresses #142
this should fix the build of the recently added ipv6 code on Mac OS X,
OpenBSD and possibly FreeBSD.
closes #83, closes #85
thanks to @cam13 and @vonnyfly for reporting/testing.