localnet with hostnames/dns is not compatible with remote dns - if
remote dns is activated, we get an ip from the remote dns resolver in
the connect() call, so we can't tell whether the destination would
match any localnet - except for the ANY localnet 0.0.0.0. in that
case we would need to do a real dns lookup involving the remote dns
ip: first use the rdns resolver to get the original hostname back,
then call the native dns resolver function - for which the only
option is getaddrinfo(), unless we want to support the 5 different
gethostbyname_r() variants in existence; getaddrinfo() in turn
requires memory allocation/free() - in other words, a huge mess.
we also can't easily check in the resolver whether an ANY-destination
localnet is enabled and the port matches, because at that stage the
resolver may only know the hostname, but not the destination port.
addressing #358
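to illustrate the mess, here's a hypothetical sketch (not code from
this tree) of what an ANY-localnet check under remote dns would need;
rdns_get_host_for_ip() is an assumed helper standing in for the
reverse-mapping step:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <stddef.h>

    /* assumed helper: maps a remote-dns ip back to the hostname
     * that was originally resolved, or returns NULL. */
    extern const char *rdns_get_host_for_ip(in_addr_t ip);

    static int in_net(in_addr_t a, in_addr_t net, in_addr_t mask)
    {
        return (a & mask) == (net & mask);
    }

    static int resolves_into_localnet(in_addr_t remote_dns_ip,
                                      in_addr_t net, in_addr_t mask)
    {
        struct addrinfo hints = {0}, *res, *p;
        int match = 0;
        const char *host = rdns_get_host_for_ip(remote_dns_ip);
        if (!host) return 0;
        hints.ai_family = AF_INET;
        /* getaddrinfo() is the only portable native resolver left
         * to us, and it malloc()s its result list... */
        if (getaddrinfo(host, NULL, &hints, &res)) return 0;
        for (p = res; p && !match; p = p->ai_next)
            match = in_net(((struct sockaddr_in *)p->ai_addr)
                           ->sin_addr.s_addr, net, mask);
        freeaddrinfo(res); /* ...which we then have to free again. */
        return match;
    }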
gethostbyname() is specified to transform simple numeric ipv4
addresses into their binary form. since proxy_gethostbyname() was
used as a backend for all resolver functions, we somehow assumed the
check for a numeric ipv4 address was done at another call site;
however, this didn't hold true when the caller used gethostbyname()
directly.
fixes #347
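a minimal sketch of the kind of check that was missing - the
function name is illustrative, not the actual proxychains-ng
internals:

    #include <netdb.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <stddef.h>

    static struct hostent *numeric_gethostbyname(const char *name)
    {
        static struct in_addr addr;
        static char *addr_list[2];
        static struct hostent he;
        if (!inet_aton(name, &addr))   /* not "1.2.3.4"-style? */
            return NULL;               /* caller does a real lookup */
        addr_list[0] = (char *)&addr;
        addr_list[1] = NULL;
        he.h_name = (char *)name;
        he.h_aliases = &addr_list[1];  /* empty, NULL-terminated */
        he.h_addrtype = AF_INET;
        he.h_length = sizeof addr;
        he.h_addr_list = addr_list;
        return &he;
    }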
the buffer buff was only used for the initial handshake packets,
which in all supported protocols are usually less than 100 bytes;
with user/pass and a dns name we'd require at most 768 bytes, which
still leaves us a formidable 256 bytes for the rest of the packet.
this fixes a segfault with microsocks, which on musl uses tiny
thread stack sizes of 8KB.
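for reference, a back-of-the-envelope tally of the socks5 worst
cases per rfc 1928/1929 - my numbers for illustration, not taken
from the commit:

    /* rough socks5 worst cases, assuming the buffer holds one
     * handshake message at a time: */
    enum {
        S5_AUTH_MAX    = 1 + 1 + 255 + 1 + 255, /* ver, ulen, user,
                                                   plen, pass = 513 */
        S5_CONNECT_MAX = 4 + 1 + 255 + 2,       /* hdr, alen, dns
                                                   name, port = 262 */
        BUFF_SIZE_NEW  = 1024                   /* ample headroom */
    };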
careful analysis has shown that the buffer is only ever used for
at most a single hostname, so 256 bytes are sufficient.
the huge 8KB buffer caused a stack overflow when used with
microsocks, which defaults to tiny 8KB thread stacks with musl libc.
since many users complain about issues with modern, ultracomplex
clusterfuck software such as chromium, nodejs, etc, i've reconsidered
one of my original ideas for implementing remote dns lookup support.
instead of having a background thread serving requests via a pipe,
the user manually starts a background daemon process before running
proxychains, and the two processes then communicate via UDP.
this requires far fewer hacks (like hooking close() to prevent the
pipes from getting closed) and doesn't need to call any
async-signal-unsafe code like malloc(). this means it should be much
more compatible than the previous method; however, it's not as
practical and is slightly slower.
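to make the scheme concrete, here's a bare-bones sketch of the
client side of such a lookup; the wire format (hostname out, 4-byte
ipv4 back) and the port are invented for illustration and are NOT
the real proxychains4-daemon protocol:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    static int daemon_resolve(const char *host, struct in_addr *out)
    {
        struct sockaddr_in d = {0};
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) return -1;
        d.sin_family = AF_INET;
        d.sin_port = htons(1053);            /* assumed daemon port */
        d.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        /* sendto()/recvfrom() are async-signal-safe and need no
         * malloc, unlike the old pipe-plus-allocator-thread
         * approach. note the lack of timeout handling, as
         * mentioned above. */
        if (sendto(fd, host, strlen(host), 0,
                   (struct sockaddr *)&d, sizeof d) < 0) goto err;
        if (recvfrom(fd, out, sizeof *out, 0, NULL, NULL)
            != (ssize_t)sizeof *out) goto err;
        close(fd);
        return 0;
    err:
        close(fd);
        return -1;
    }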
it's recommended that the proxychains4-daemon run on localhost, and
if you use proxychains-ng a lot you might want to set it up as a
service that starts on boot. a single proxychains4-daemon should
theoretically be able to serve many parallel proxychains4 instances,
but this has not been tested yet. it's also possible to run the
daemon on other computers, even over the internet, but currently
there is no error-checking/timeout code at all; that means the UDP
connection needs to be very stable.
the library code used for the daemon sources is from my projects
libulz[0] and htab[1], and the server code is loosely based on
microsocks[2]. their licenses are all compatible with the GPL.
where not otherwise mentioned, they're released for this purpose
under the standard proxychains-ng license (see COPYING).
[0]: https://github.com/rofl0r/libulz
[1]: https://github.com/rofl0r/htab
[2]: https://github.com/rofl0r/microsocks
using strstr() is a very error-prone way to do config parsing.
for example, testing for "proxy_dns" against the line
"proxy_dns_old" would wrongly match.
we fix this by removing leading and trailing whitespace from the
line to be parsed and using strcmp()/strncmp() instead.
the if(1) has been inserted so we can keep the same indentation level
and not spam the commit with whitespace changes.
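a minimal sketch of the trim-then-compare approach (helper names
illustrative, not the exact code from the commit):

    #include <ctype.h>
    #include <string.h>

    /* strip leading and trailing whitespace in place; returns the
     * start of the trimmed string. */
    static char *trim_string(char *s)
    {
        char *e;
        while (isspace((unsigned char)*s)) s++;
        e = s + strlen(s);
        while (e > s && isspace((unsigned char)e[-1])) e--;
        *e = 0;
        return s;
    }

    static int parse_line(char *line)
    {
        char *p = trim_string(line);
        if (!strcmp(p, "proxy_dns"))      /* exact match only: the  */
            return 1;                     /* line "proxy_dns_old"   */
        if (!strncmp(p, "localnet ", 9))  /* keyword plus argument  */
            return 2;
        return 0;
    }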
some lamer on IRC by the name of annoner/R3M0RS3/penis was complaining
that 3.1 is a lot better than proxychains-ng, because it happens to
work with the browser he's interested in.
since this wasn't the first time this was requested, let's give
those lamers what they want: lame code!
since we caved in to demands that it should be possible to allow
hostnames in the proxy list section, we now have to deal with the
fallout. the code was calling at_get... assuming that the allocator
thread is always in use.
simplify code by adding a new at_msg struct that contains both
the message header and the message body.
this allows e.g. atomic writes of the whole message instead of doing
it in 2 steps. unfortunately reads still have to be done in at least
2 steps, since there are no atomicity guarantees[0].
additionally, the message header shrank from 8 to 4 bytes.
[0]: https://stackoverflow.com/questions/14661708/under-what-conditions-are-pipe-reads-atomic
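roughly, the shape of the change - field names and the exact layout
here are illustrative:

    #include <stdint.h>
    #include <unistd.h>

    struct at_msghdr {
        uint16_t msgtype;
        uint16_t datalen;      /* length of the body that follows */
    };                         /* 4 bytes, down from 8 */

    struct at_msg {
        struct at_msghdr h;
        char m[256];           /* body, e.g. a hostname */
    };

    static int sendmessage(int fd, struct at_msg *msg)
    {
        size_t n = sizeof msg->h + msg->h.datalen;
        /* one write for the whole message: pipe writes up to
         * PIPE_BUF bytes (>= 512 per POSIX) are atomic. */
        return write(fd, msg, n) == (ssize_t)n ? 0 : -1;
    }

    static int getmessage(int fd, struct at_msg *msg)
    {
        /* no such guarantee for reads: fetch the fixed-size header,
         * then exactly datalen body bytes (a real version would
         * loop on short reads). */
        if (read(fd, &msg->h, sizeof msg->h) != sizeof msg->h)
            return -1;
        return read(fd, msg->m, msg->h.datalen)
               == (ssize_t)msg->h.datalen ? 0 : -1;
    }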
In proxychains, we create pipes and use them internally.
If exec() is called by the program we run, the pipes opened
previously are never closed, causing a file descriptor leak that
may eventually crash the program.
This commit calls fcntl() to set the FD_CLOEXEC flag on the pipes.
AFAIK there's no race condition on pipe creation, but we still
prefer to call the newer pipe2() with O_CLOEXEC if it's supported
by the system, since it sets the flag atomically at creation time,
which prevents potential race conditions in the future.
Signed-off-by: Tom Li <tomli@tomli.me>
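A condensed sketch of the idea (not the verbatim change; HAVE_PIPE2
stands in for whatever build-time detection is used):

    #define _GNU_SOURCE /* for pipe2() on glibc */
    #include <fcntl.h>
    #include <unistd.h>

    static int make_cloexec_pipe(int fds[2])
    {
    #ifdef HAVE_PIPE2
        /* O_CLOEXEC is applied atomically at creation time, leaving
         * no window for a concurrent fork()+exec() to inherit the
         * fds. */
        return pipe2(fds, O_CLOEXEC);
    #else
        if (pipe(fds) < 0) return -1;
        fcntl(fds[0], F_SETFD, FD_CLOEXEC);
        fcntl(fds[1], F_SETFD, FD_CLOEXEC);
        return 0;
    #endif
    }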
this should make it work automatically on any new platform without
having to add yet another ifdef hack.
should fix issues with android (#265).
additionally, the code for systems lacking a GNU-compatible
getservbyname_r() is now guarded by a mutex, which prevents possible
races and thereby resolves the ancient "TODO" item.
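a sketch of the fallback path (names illustrative): systems without
a GNU-compatible getservbyname_r() call the non-reentrant
getservbyname() under a mutex and copy the result out while the lock
is held:

    #include <netdb.h>
    #include <netinet/in.h>
    #include <pthread.h>

    static pthread_mutex_t services_lock = PTHREAD_MUTEX_INITIALIZER;

    static int portname_to_port(const char *name,
                                unsigned short *port)
    {
        struct servent *se;
        int ret = -1;
        pthread_mutex_lock(&services_lock);
        /* getservbyname() returns a pointer into a shared static
         * buffer, hence the lock. */
        se = getservbyname(name, NULL);
        if (se) {
            *port = ntohs(se->s_port); /* copy out while locked */
            ret = 0;
        }
        pthread_mutex_unlock(&services_lock);
        return ret;
    }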