Add test
Add body to reqenv
commit_hash:222fa588fa7d7a7f65869bd997d32610e78b7378
commit_hash:cfe9165ddad260bd29afd422967a26259367dcc7

commit_hash:4721e3d84bd7a730a2fc5be4d0e42da14ef16c40

commit_hash:909fa7aadbf673448dbc709b19d2088963b40404

commit_hash:fa0dfc03d76b5e40181e589078cdfff0c13ae51d
Remove the `libidn` dependency from the universal fetcher (at the request of `ya-bin`)
commit_hash:525ba52d2ea4b45a15e726f7d9c73081fa2812ef
If a redirect arrives with a URL whose scheme is **http (without the s!)**, the port is still set to ++443++.
This PR fixes that behavior (see the sketch below).
commit_hash:ef496e4f1cb08f3ba3b9b0f89a34f077cce38e00
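A minimal sketch of the corrected behavior, with a hypothetical helper name (DefaultPortForScheme is not the actual fetcher function): the default port must be derived from the redirect URL's scheme instead of being carried over from the original https connection.

```cpp
#include <cstdint>
#include <string>

// Hypothetical helper: the port for a redirect target must follow the
// scheme of the *new* URL; before the fix, a redirect from an https://
// URL to a plain http:// URL could keep port 443.
uint16_t DefaultPortForScheme(const std::string& scheme) {
    if (scheme == "https") {
        return 443;
    }
    if (scheme == "http") {
        return 80; // plain http falls back to 80, not 443
    }
    return 0; // unknown scheme: let the caller decide
}
```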
add constant for HTTP 434 code
commit_hash:bb04bc4efd36dc9989de7535b40c968c69b27472
commit_hash:9f66fdc1ffe8653fba7144bff4dbee4b92723b50
e2f99a0432865120bc478a3fb91956424c374445
socket by default
announcement: https://at.yandex-team.ru/clubs/arcadia/30286
77f0f6dfa6c3bc8c2a8428ecf91cd76b22bdb60e
1b80d64b6a03772edc52f2331a860ff0b5621898
3a95ba7ea18b67eb6bd8d04631814456f4881138
7783a07e40a1942583eb0470e1a4b58b3369951e
05bf28fe4eb31cec383104614cfd06d51d5c6a72
351519c01d45a22beceb491029a8f516619673a0
ddffb1ebbc56036902fc8b93aac08ff45a8ef547
042561a12173d74b7f904c5e5b4c2a89c148015f
126c0cfa83378569f6fcef85a64a147009117de2
Relates: https://st.yandex-team.ru/, https://st.yandex-team.ru/
* Library import 8
* Add contrib/libs/cxxsupp/libcxx/include/__verbose_abort
Update tools: yexport, os-yexport
This reverts commit 16e792be75335b09a4f9f254e3972030af83b1ad, reversing
changes made to 3790f3d771d1a65ed6c0d05f3e0d79ff13308142.
https://clubs.at.yandex-team.ru/arcadia/29404
headers
With a single poller thread on incoming connections, every OS scheduler latency on that thread's wakeup directly affects request timings. With oneshot poll events, we can poll the same poller from many threads, and if one thread has stalled for some reason, another will take over its work on the next incoming event. So:
- make a vector of listener threads instead of a single one;
- add an nListenerThreads option;
- stop request queues and listening sockets from the last finished thread;
- check incoming options and set OneShotPoll if needed.
There is a problem around removing connections on the MaxConnections limit or ExpirationTimeout: there is no simple way to safely remove items from epoll (https://lwn.net/Articles/520012/) if its event data holds raw pointers. Handle it via postponed deletion of connection objects, waiting until all listener threads are ready to reenter the poller wait and no thread remains in which the deleted object could still be used (a sketch of this protocol follows below):
- close the socket immediately after removing it from the poller, but instead of destroying the TClientConnection immediately, put it on a "pending delete" list;
- add a cleanup state with a thread mask, each bit indicating that the corresponding thread still has to reenter the poller;
- call the Cleanup routine before each poller wait; it clears the current thread's bit for each pending connection;
- when the thread mask becomes all zeros, actually delete the connection;
- enforce a timeout on the poller wait to guarantee that all threads do reenter;
- add more configurations for some tests.
There are no significant changes or overhead for the standard case with a single listener thread: cleanup and pending deletion are simply skipped. There is also no overhead for the common case where connection removal is rare. Here is the same review with nListenerThreads = 4 by default: https://a.yandex-team.ru/review/4413226.
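A minimal sketch of the pending-delete protocol described above, with hypothetical names (TConnection, TCleanupState); the real library code and its locking scheme are not reproduced here.

```cpp
#include <cstdint>
#include <list>
#include <mutex>

// Hypothetical connection object; its socket is already closed by the
// time the object reaches the pending-delete list.
struct TConnection {
    // ... request state, buffers, etc. (not shown)
};

class TCleanupState {
public:
    explicit TCleanupState(size_t nThreads)
        : AllThreadsMask_(nThreads >= 64 ? ~uint64_t(0) : (uint64_t(1) << nThreads) - 1)
    {
    }

    // Called after the connection was removed from the poller and its
    // socket closed: defer destruction instead of deleting right away.
    void Defer(TConnection* conn) {
        std::lock_guard<std::mutex> guard(Lock_);
        Pending_.push_back({conn, AllThreadsMask_});
    }

    // Called by listener thread `threadIndex` right before it reenters
    // the poller wait: past this point it cannot hold a pointer to any
    // connection that was deferred earlier.
    void Cleanup(size_t threadIndex) {
        const uint64_t bit = uint64_t(1) << threadIndex;
        std::lock_guard<std::mutex> guard(Lock_);
        for (auto it = Pending_.begin(); it != Pending_.end();) {
            it->ThreadMask &= ~bit;
            if (it->ThreadMask == 0) {
                // Every thread has reentered the poller since deferral:
                // nobody can reference the object anymore.
                delete it->Conn;
                it = Pending_.erase(it);
            } else {
                ++it;
            }
        }
    }

private:
    struct TPendingDelete {
        TConnection* Conn;
        uint64_t ThreadMask; // bit i set => thread i has not reentered yet
    };

    const uint64_t AllThreadsMask_;
    std::mutex Lock_;
    std::list<TPendingDelete> Pending_;
};
```

Because the poller wait has a finite timeout, even an idle listener thread periodically wakes up and calls Cleanup, so a pending connection cannot sit in the list indefinitely.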
- apply one-shot poll (under an option) to the listening socket too;
- some code rearrangements.
- move the listener and thread pool initialization stage out of the listener thread (no actual changes: this part of the code was awaited via ListenerStartEvent anyway);
- remove ListenerStartEvent and the ListenerRunningOK flag, which are no longer needed;
- make the Reqs list of listening sockets a class member;
- leave Reqs list destruction in the listener thread (it should happen right after Shutdown, but only once the polling loop has stopped, to prevent races);
- add a unit test for server startup failure.
With WaitReadOneShot:
- there is no need to do Unwait on connection activation, one less syscall per request;
- it becomes possible to run several listener threads over one epoll poller (see the sketch below).
Turn the option on for search daemons (check that it is on by default here: https://a.yandex-team.ru/review/4372795/details).
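A minimal sketch of the idea with raw epoll calls rather than the library's own poller classes; HandleRequest is a hypothetical stand-in for the request handling path.

```cpp
#include <sys/epoll.h>

void HandleRequest(int fd); // application logic, not shown

// Hypothetical sketch of several listener threads sharing one epoll fd:
// EPOLLONESHOT makes the kernel disarm an fd when it delivers an event,
// so no two threads are ever woken for the same connection, and the
// explicit disarm syscall (Unwait) on activation is no longer needed.
void ListenerLoop(int epollFd) {
    for (;;) {
        epoll_event ev;
        // A finite timeout lets the thread run periodic housekeeping.
        int n = epoll_wait(epollFd, &ev, /*maxevents=*/1, /*timeout_ms=*/1000);
        if (n <= 0) {
            continue; // timeout or EINTR: just reenter the wait
        }
        const int fd = ev.data.fd;
        HandleRequest(fd); // safe: the fd is disarmed for all threads
        // Re-arm only once we are ready for the next event on this fd.
        epoll_event rearm{};
        rearm.events = EPOLLIN | EPOLLONESHOT;
        rearm.data.fd = fd;
        epoll_ctl(epollFd, EPOLL_CTL_MOD, fd, &rearm);
    }
}
```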
friendly way
Under heavy load and high RPS, the current thread pool implementation seems to have problems, at least with contention on the lock inside the condvar (long futex wait calls from the http server listener thread), so try to implement something more efficient:
- replace the condvar with a TEventCounter implementation without an internal lock (a pthread condvar maintains waiter wakeup order, which the thread pool doesn't need);
- introduce the well-known bounded MPMC queue over a ring buffer (sketched below);
- get rid of TDecrementingWrapper;
- add options to turn the new pool on in library/cpp/http/server and search/daemons (to be removed after adoption);
- make the elastic queue unit test check both versions;
- work around problems with android/arm build targets.
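A minimal sketch of that "well-known" bounded MPMC queue over a ring buffer (Dmitry Vyukov's per-cell sequence-number design); the class and member names here are illustrative, not the library's actual ones.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Each cell carries a sequence number that encodes whether it is free
// for the next push or holds a value for the next pop; a production
// version would also pad cells and counters to avoid false sharing.
template <typename T>
class TBoundedMpmcQueue {
public:
    explicit TBoundedMpmcQueue(size_t capacity) // capacity: power of two
        : Mask_(capacity - 1)
        , Cells_(capacity)
    {
        for (size_t i = 0; i < capacity; ++i) {
            Cells_[i].Seq.store(i, std::memory_order_relaxed);
        }
    }

    bool TryPush(T value) {
        size_t pos = Tail_.load(std::memory_order_relaxed);
        for (;;) {
            TCell& cell = Cells_[pos & Mask_];
            const size_t seq = cell.Seq.load(std::memory_order_acquire);
            const intptr_t dif = (intptr_t)seq - (intptr_t)pos;
            if (dif == 0) { // cell is free for this ticket
                if (Tail_.compare_exchange_weak(pos, pos + 1, std::memory_order_relaxed)) {
                    cell.Value = std::move(value);
                    cell.Seq.store(pos + 1, std::memory_order_release);
                    return true;
                } // CAS failure reloaded pos; retry
            } else if (dif < 0) {
                return false; // queue is full
            } else {
                pos = Tail_.load(std::memory_order_relaxed);
            }
        }
    }

    bool TryPop(T& value) {
        size_t pos = Head_.load(std::memory_order_relaxed);
        for (;;) {
            TCell& cell = Cells_[pos & Mask_];
            const size_t seq = cell.Seq.load(std::memory_order_acquire);
            const intptr_t dif = (intptr_t)seq - (intptr_t)(pos + 1);
            if (dif == 0) { // cell holds a value for this ticket
                if (Head_.compare_exchange_weak(pos, pos + 1, std::memory_order_relaxed)) {
                    value = std::move(cell.Value);
                    cell.Seq.store(pos + Mask_ + 1, std::memory_order_release);
                    return true;
                }
            } else if (dif < 0) {
                return false; // queue is empty
            } else {
                pos = Head_.load(std::memory_order_relaxed);
            }
        }
    }

private:
    struct TCell {
        std::atomic<size_t> Seq;
        T Value;
    };

    const size_t Mask_;
    std::vector<TCell> Cells_;
    std::atomic<size_t> Head_{0};
    std::atomic<size_t> Tail_{0};
};
```

Workers that find the queue empty would then park on the lock-free TEventCounter instead of a condvar, which is presumably where the futex contention disappears.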
uaas""