SOGo | BTS

ID: 0003510
Project: SOGo
Category: Backend Mail
View Status: public
Date Submitted: 2016-02-09 08:18
Last Update: 2017-01-09 15:49
Reporter: Jens Erat
Assigned To: ludovic
Priority: high
Severity: feature
Reproducibility: always
Status: resolved
Resolution: fixed
Product Version: 2.3.7
Fixed in Version: 3.2.5
Summary: 0003510: DoS through uploading large attachments
Description: SOGo seems to have a memory leak with attachment uploads, possibly limited to attachments that could not be stored on the IMAP server (because the resulting mail would be too large and is rejected).

It is possible to crash the whole service/server by (repeatedly) uploading attachments. Consider the attached Zabbix memory graph. The test machine used for reproducing the issue is equipped with 2GB of RAM and 16GB of swap space.

The same issue probably also happened multiple times on our production server, not because of an attacker, but because of normal users with failed uploads. On our live setup, free memory is gradually consumed and later swapped out. After some time, service performance suddenly drops to barely usable, probably once the first non-garbage data has been swapped out.

Once the upload has completed, it looks like SOGo copies the attachment (into memcached?) and then eliminates the copy in the sogod worker. The memcached copy stays forever, or until the SOGo service is restarted.
Steps To Reproduce:
1. Create a large file, for example `dd if=/dev/zero of=/tmp/1GB bs=1M count=1000`
2. Open a new mail in SOGo and try to attach the large file (a command-line sketch of the same upload follows below)
3. If the attachment fails, some memory gets freed, but not all of it
4. Repeat steps 1-3 until the server crashes
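For reference, a rough command-line equivalent of the same reproduction (a sketch only: the form field name, the draft URL and the session cookie are assumptions modelled on the POST request logged in note 0009488 below; in practice the draft URL and cookie have to be copied from a browser session):

  # create a 1GB dummy file
  dd if=/dev/zero of=/tmp/1GB bs=1M count=1000
  # post it as a multipart upload to the draft's save URL, mimicking the web UI
  # HOST, USER, DRAFT_ID, SESSION_COOKIE and the field name "attachment" are placeholders
  curl -v \
    -b "0xHIGHFLYxSOGo=$SESSION_COOKIE" \
    -H "X-Requested-With: XMLHttpRequest" \
    -F "attachment=@/tmp/1GB" \
    "http://$HOST/SOGo/so/$USER/Mail/0/folderDrafts/$DRAFT_ID/save"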
Tags: No tags attached.
Attached Files:
zabbix-memory.png (83,067 bytes) 2016-02-09 08:18
zabbix-memory-production.png (105,569 bytes) 2016-02-09 08:18
sogod.log (31,557 bytes) 2016-02-12 04:48

- Notes
(0009488)
ludovic (administrator)
2016-02-10 08:40

I've just tried a few times with different file sizes. I got the following errors:

1GB:

2016-02-10 08:28:35.298 sogod[8139] ERROR: could not parse body part at index 10 in multipart <HttpRequest: method=POST uri=/SOGo/so/sogo1/Mail/0/folderDrafts/newDraft1455110765-1/save header={accept = "application/json, text/javascript, */*; q=0.01"; "accept-encoding" = "gzip, deflate"; "accept-language" = "en-US,en;q=0.8,fr;q=0.6"; "content-length" = 1048577226; "content-type" = "multipart/form-data; boundary=----WebKitFormBoundarymrXtDVSekpUhFByq"; cookie = "0xHIGHFLYxSOGo=basic%20VTZWNWR4MEduOU00VjErMEx4S2JkU0pqV1dnZlpnVk1Nb0E5SDNDVWxiNk5pdWVCY25ENWJETllkd3JzTTZ5UmN6VDdZVU8yWUQ4SU1na3FwVEFZcXFadis0WHZBMStvQ0d2dEoyTThQRUNtMi9zaWxtR3NQQ0lHSVBCZXhyOVhTenFCSVMwZXNVWlAyelZDNENPaXhqUXh2c1RsZ21ISDlYU0NadU5qWGlhZCtzdUdUR3B6My9BR3VoVFhwUWRvNVcwTEZYNGxhc0NELzluUWlGQ1VGdz09OlpOVG5VaFFxcmc1Z1QzdWpub1RQcEE9PQ%3D%3D"; host = sogoludo; origin = "http://sogoludo"; referer = "http://sogoludo/SOGo/so/sogo1/Mail/0/folderDrafts/newDraft1455110765-1/edit"; "user-agent" = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.82 Safari/537.36"; "x-forwarded-for" = "172.20.20.42"; "x-forwarded-host" = sogoludo; "x-forwarded-server" = "sogo.example.com"; "x-requested-with" = XMLHttpRequest; "x-webobjects-remote-user" = "(null)"; "x-webobjects-server-port" = 80; "x-webobjects-server-protocol" = "HTTP/1.0"; } body=(null)>: <NSException: 0x7fff64000930> NAME:NSMallocException REASON:Default zone has run out of memory INFO:(null)

256MB:

Feb 10 08:36:10 sogod [8139]: <0x0x7fff64013210[NGImap4Client]> ERROR(-[NGImap4Client _processUnknownCommandParserException:]): catched non-IMAP4 parsing exception NSMallocException: Default zone has run out of memory

128MB:

2016-02-10 08:37:19.249 sogod[8139] WARNING(-[NGBufferedStream writeBytes:count:]): got passed in length 175MB (183929292 bytes, errcode=2147483647) ...

In all three cases, sogod's memory consumption stayed stable:

sogo 8139 12.0 0.8 330460 17232 pts/0 S+ 08:25 1:41 /usr/local/sbin/sogod -WOUseWatchDog NO -WONoDetach YES -WOPort 20000 -WOWorkersCount 1 -WOLogFile - -WOPidFile /tmp/sogo.pidr

So I suspect this is happening at another stage.

What errors are you getting?
(0009489)
Jens Erat (reporter)
2016-02-10 09:14

We had error messages saying that processes were killed because they used more than the maximum amount of memory configured in the SOGo configuration (512MB in our case).

I'm out of office today and tomorrow and cannot reproduce it on my home instance (which has a different setup from the university systems). I will have to report back on Friday.

We could observe this issue on both our test and production servers. Both have a memcached attached, and the frontend Apache server runs on another machine. I think I observed increased memory consumption on memcached, but I am not sure about that. I will have to reproduce the issue again to monitor that (and this time take better care not to completely DoS the test instance by accident).

Does the behavior change if you set WOUseWatchDog to YES and WOWorkersCount to a value larger than one? A configuration sketch follows below.
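For reference, a minimal sketch of what those settings look like in /etc/sogo/sogo.conf (assuming the plist-style configuration file is used; the values are only examples, not our actual settings):

  {
    WOUseWatchDog = YES;   /* let the parent process supervise the workers */
    WOWorkersCount = 3;    /* more than one worker process */
  }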
(0009514)
Christian Mack (developer)
2016-02-12 04:11

From our test machine:

2016-02-09 13:26:42.346 sogod[22771] WARNING(-[NGBufferedStream writeBytes:count:]): got passed in length 274MB (287388529 bytes, errcode=2147483647) ...
Feb 09 13:27:17 sogod [22753]: [WARN] <0x0x2b53ff73bab8[WOWatchDogChild]> pid 22771 has been hanging in the same request for 2 minutes
Feb 09 13:28:41 sogod [22771]: [ERROR] <0x0x2b540d5d8778[WOHttpTransaction]> client disconnected during delivery of response for <WORequest[0x0x2b540d633f98]: method=POST uri=/SOGo/so/pop15959/Mail/0/folderDrafts/newDraft1455020593-1/save app=SOGo rqKey=so rqPath=pop15959/Mail/0/folderDrafts/newDraft1455020593-1/save> (len=382): the socket was shutdown
Feb 09 13:28:42 sogod [22771]: 134.34.1.247 "POST /SOGo/so/pop15959/Mail/0/folderDrafts/newDraft1455020593-1/save HTTP/1.0" 200 382/2677 204.530 - - -106072K
Feb 09 13:30:34 sogod [22753]: [ERROR] <0x0x2b53ff73bab8[WOWatchDogChild]> FAILURE receiving status for child 22771
Feb 09 13:30:34 sogod [22753]: [ERROR] <0x0x2b53ff73bab8[WOWatchDogChild]> socket: <NGActiveSocket[0x0x2b540a7ff1f8]: mode=rw address=(null) connectedTo=<0x0x2b53ff713238[NGLocalSocketAddress]: /tmp/_ngsocket_22753_0x2b5406a12cf8_000> receive-timeout=1.000s>
Feb 09 13:30:35 sogod [22753]: [ERROR] <0x0x2b53ff73bab8[WOWatchDogChild]> exception: <NGSocketTimedOutException: 0x2b53ff7542f8> NAME:NGSocketTimedOutException REASON:the socket was shutdown INFO:{errno = 11; error = "Resource temporarily unavailable"; stream = "{object = <NGActiveSocket[0x0x2b540a7ff1f8]: mode=rw address=(null) connectedTo=<0x0x2b53ff713238[NGLocalSocketAddress]: /tmp/_ngsocket_22753_0x2b5406a12cf8_000> receive-timeout=1.000s>;}"; }
Feb 09 13:30:35 sogod [22753]: <0x0x2b53ff73bab8[WOWatchDogChild]> sending terminate signal to pid 22771
Feb 09 13:30:35 sogod [22753]: [ERROR] <0x0x2b53ff73bab8[WOWatchDogChild]> FAILURE notifying child 22771
Feb 09 13:30:35 sogod [22753]: <0x0x2b53ff73bab8[WOWatchDogChild]> sending terminate signal to pid 22771
Feb 09 13:31:37 sogod [22753]: [WARN] <0x0x2b53ff73bab8[WOWatchDogChild]> pid 22771 has been hanging in the same request for 1 minutes
Feb 09 13:32:40 sogod [22753]: [WARN] <0x0x2b53ff73bab8[WOWatchDogChild]> pid 22771 has been hanging in the same request for 2 minutes
Feb 09 13:33:36 sogod [22753]: [WARN] <0x0x2b53ff73bab8[WOWatchDogChild]> pid 22771 has been hanging in the same request for 3 minutes
Feb 09 13:34:36 sogod [22753]: [WARN] <0x0x2b53ff73bab8[WOWatchDogChild]> pid 22771 has been hanging in the same request for 4 minutes
Feb 09 13:35:36 sogod [22753]: [WARN] <0x0x2b53ff73bab8[WOWatchDogChild]> pid 22771 has been hanging in the same request for 5 minutes

After that, there were no more log entries; the machine was dead with 100% CPU (2 cores) and swapping until no space was left.
Four hours later we rebooted the machine.
(0009516)
Jens Erat (reporter)
2016-02-12 04:51

I reproduced the issue -- uploading a single 1GB file.

I uploaded a log excerpt starting from the time I uploaded the attachment. Interestingly, one process after another gets occupied and dies after some time -- without the attachment being uploaded again.

I'm sorry I forgot to put timestamps next to the top outputs, but they were taken at arbitrary times anyway (not related to any special events in the log). At least they are in order, and one can correlate what is going on using the process IDs.
(0009517)
Jens Erat (reporter)
2016-02-12 05:04

After restarting, the problem occurred again. Deleting the temporary files listed below and starting the SOGo service again finally resolved the issue (until a large attachment was uploaded again).

-rw------- 1 sogo sogo 1,4G 12. Feb 10:12 OGo0F7156BDA1E501.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:14 OGo0F7156BDA28502.tmp
-rw------- 1 sogo sogo 291M 12. Feb 10:45 OGo0F7556BDA9A401.tmp
-rw------- 1 sogo sogo 1,1G 12. Feb 10:10 OGo0F7656BDA18701.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:29 OGo0F7B56BDA5EC01.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:17 OGo0F8856BDA34601.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:24 OGo0F8856BDA4B502.tmp
-rw------- 1 sogo sogo 6,9M 9. Feb 13:35 OGo2E5756B9DD1D01.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:29 OGo53B056BDA5ED01.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:35 OGo53B056BDA76702.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:33 OGo540E56BDA6E301.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:38 OGo540E56BDA81B02.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:57 OGo57A056BDAC7801.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:50 OGo57A356BDAAFA01.tmp
-rw------- 1 sogo sogo 1,4G 12. Feb 10:52 OGo57A356BDAB5A02.tmp
(0009520)
ludovic (administrator)
2016-02-12 07:31

If you try to reproduce the issue on a very fast LAN connection to the SOGo server, does that work?

I guess you have the issue when a user uploads a very large file and the transfer takes longer than the watchdog timeout.
(0009522)
Jens Erat (reporter)
2016-02-12 07:38

Connection speed is not an issue; we are connected via Fast Ethernet (and in some places already Gigabit). The 1GB attachment was uploaded completely in about a minute, and then the upload failed with an error message in the browser saying that the message could not be stored.

The same happens when uploading smaller chunks that are transmitted within a few seconds, but you need to upload more of them for noticeable memory occupation.
(0009525)
ludovic (administrator)
2016-02-12 08:39

In the trace above, we have:

Feb 09 13:28:41 sogod [22771]: [ERROR] <0x0x2b540d5d8778[WOHttpTransaction]> client disconnected during delivery of response for <WORequest[0x0x2b540d633f98]: method=POST uri=/SOGo/so/pop15959/Mail/0/folderDrafts/newDraft1455020593-1/save app=SOGo rqKey=so rqPath=pop15959/Mail/0/folderDrafts/newDraft1455020593-1/save> (len=382): the socket was shutdown


Do you have a proxy in front of sogod that could drop the connection?
(0009526)
Christian Mack (developer)
2016-02-12 08:53

We have only the Apache reverse proxy in front of it (on another machine), which handles HTTPS, delivers the static content and does the usual reverse translations.
There are no size limits defined in its config.
In the ProxyPass directive, there is a timeout of one minute on the test server and of 15 minutes on the production server.
(0009527)
ludovic (administrator)
2016-02-12 09:07

Would it be possible for me to get root access with gdb on the test machine, and for you to reproduce the problem while I attach to the sogod processes with gdb?
(0009528)
Christian Mack (developer)
2016-02-12 09:25

OK, I assume your IP and key from 2015-02-09 are still valid?

But the security team has already left for the weekend, so the firewall cannot be opened before next week.
(0009529)
ludovic (administrator)
2016-02-12 10:02

I think I have a solution to propose to you.

1- we introduce a SOPE parameter to limit the POST/PUT size

2- if SOPE sees that the limit is exceeded, it does not try to parse the multipart data being uploaded and returns a proper exception to the HTTP client

3- the server process is immediately made available again at that point; no big memory consumption is required

4- when the HTTP client (like SOGo's web interface) finishes the upload (into no man's land, really), it will get the return code generated in step 2 and tell the user "You've uploaded a file that is too big."

What do you think?
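In configuration terms, the proposal boils down to something like the following sketch (the parameter eventually landed as WOMaxUploadSize, see the changesets at the bottom of this issue; the kilobyte unit used here is an assumption):

  {
    WOMaxUploadSize = 102400;  /* assumed kilobyte unit: reject POST/PUT bodies above ~100MB */
  }

With such a limit in place, an oversized request should be refused based on the Content-Length alone, before any multipart parsing happens, so the worker never has to buffer the body.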
(0009530)
Jens Erat (reporter)
2016-02-12 10:24

What happens if:

- There is no size value attached to the file uploaded?
- A large message body is posted instead of a large attachment?
- A possible attacker (or lots of normal users) repeatedly uploads attachments of an allowed size, but drops the connections? Two smaller attachments that are together still too large to be sent to the IMAP server might result in the same effect.

A warning message for users who try to upload whole DVD images and wonder why it fails all the time (this has happened more than once!) would be a great thing, but I think the memory leak is a separate issue that also has to be addressed.

If you have trouble sending large attachments for debugging (we have a decent internet connection at our university, so if your upstream is large enough, you should be fine anyway), feel free to get in touch with either of us (our mail addresses are also valid XMPP aliases) to attach one locally.
(0009531)
ludovic (administrator)
2016-02-12 11:11

To answer your questions:

1- SOPE exposes the Content-Length. Any HTTP 1.1 client should send it during a POST/PUT - otherwise, right now, SOPE will reject the request. HTTP 0.9 and 1.0 clients won't even work with SOGo anyway.

2- we are talking about setting the limit to a fairly large value here - say 100MB or even more. You want to protect the application from any large upload, not just file attachments, so that fix is important.

3- I don't understand your point about "too large for sending to the IMAP server". If there's a limit there, it can be honoured in SOGo.

I have not been able to see whether a memory leak is present - ALL the tests I have done showed no leak, just large memory consumption due to the data needing to be encoded here and there for multiple operations in SOGo.
(0009532)
Jens Erat (reporter)
2016-02-12 11:22

Your points regarding (1) and (2) seem legitimate; I didn't consider that this limit applies to the whole request size.

With respect to (3), users might hit message size limitations and/or quota rules when uploading a new mail as a draft/sent mail. This might be what triggers the "memory leak", whatever it is; at least it reproducibly occupies memory and does not release it again.

Have a look at the top output and the log I uploaded. I sent exactly one HTTP request uploading a 1GB file. This occupied multiple sogod workers, one after another. All of the originally occupied ones were killed after some time, and others took over the job.

I'm not sure this is exactly the same issue we see on the production machine (which loses a lot of memory over time that is not freed again, not even during low-traffic hours, until it starts swapping after a week or two and gets horribly slow). At least there is the indication that we observed a massive amount of memory lost after a user tried to upload a rather large file (and failed).
(0009533)
ludovic (administrator)
2016-02-12 11:24

I'll try to set a small quota (say 10MB) and upload a 50MB file. It should fail and expose the issue, if there is one.

Assuming there is an issue and it gets fixed regarding the quota, we'll go ahead and implement the proposed solution, which should fix the issue you are facing.
(0009534)
ludovic (administrator)
2016-02-12 11:43

I see no memory leak when trying to attach a file and going over quota.
(0009561)
Jens Erat (reporter)
2016-02-17 06:01

The patch (updated from the nightly repository, which already includes it) did not fully resolve the issue. I set the maximum allowed attachment size to 30MB.

When uploading single large attachments, the issue seems indeed resolved (users trying to upload DVD images do not crash the system any more).

When uploading a large number of 25MB attachments, the system still crashes. Some of the more interesting log lines:
2016-02-17 11:33:25.047 sogod[14835] WARNING(-[NGBufferedStream writeBytes:count:]): got passed in length 102MB (107771571 bytes, errcode=2147483647) ...
Feb 17 11:33:31 sogod [14813]: [WARN] <0x0x2b5c7eb39878[WOWatchDogChild]> pid 14837 has been hanging in the same request for 1 minutes
Feb 17 11:33:33 sogod [14813]: [WARN] <0x0x2b5c7eb12698[WOWatchDogChild]> pid 14842 has been hanging in the same request for 1 minutes
Feb 17 11:33:34 sogod [14813]: [WARN] <0x0x2b5c7eb2fc78[WOWatchDogChild]> pid 14840 has been hanging in the same request for 1 minutes
Feb 17 11:33:34 sogod [14813]: [WARN] <0x0x2b5c7eb121f8[WOWatchDogChild]> pid 14843 has been hanging in the same request for 1 minutes
Feb 17 11:33:34 sogod [14813]: [WARN] <0x0x2b5c7eafed18[WOWatchDogChild]> pid 14845 has been hanging in the same request for 1 minutes
Feb 17 11:33:34 sogod [14813]: [WARN] <0x0x2b5c7eb34308[WOWatchDogChild]> pid 14835 has been hanging in the same request for 1 minutes
2016-02-17 11:33:34.102 sogod[14845] WARNING(-[NGBufferedStream writeBytes:count:]): got passed in length 137MB (143695238 bytes, errcode=2147483647) ...
Feb 17 11:33:43 sogod [14813]: [ERROR] <0x0x2b5c7eb2e2d8[WOWatchDogChild]> FAILURE receiving status for child 14839
Feb 17 11:33:43 sogod [14813]: [ERROR] <0x0x2b5c7eb2e2d8[WOWatchDogChild]> socket: <NGActiveSocket[0x0x2b5c7eb2fbc8]: mode=rw address=(null) connectedTo=<0x0x2b5c7eaf3d88[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>
Feb 17 11:33:44 sogod [14813]: [ERROR] <0x0x2b5c7eb2e2d8[WOWatchDogChild]> exception: <NGSocketTimedOutException: 0x2b5c89c13058> NAME:NGSocketTimedOutException REASON:the socket was shutdown INFO:{errno = 11; error = "Die Ressource ist zur Zeit nicht verf\U00FCgbar"; stream = "{object = <NGActiveSocket[0x0x2b5c7eb2fbc8]: mode=rw address=(null) connectedTo=<0x0x2b5c7eaf3d88[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>;}"; }
Feb 17 11:33:44 sogod [14813]: <0x0x2b5c7eb2e2d8[WOWatchDogChild]> sending terminate signal to pid 14839
Feb 17 11:33:44 sogod [14813]: [ERROR] <0x0x2b5c7eb2e2d8[WOWatchDogChild]> FAILURE notifying child 14839
Feb 17 11:33:44 sogod [14813]: <0x0x2b5c7eb2e2d8[WOWatchDogChild]> sending terminate signal to pid 14839
Feb 17 11:33:46 sogod [14813]: [ERROR] <0x0x2b5c7eb41ef8[WOWatchDogChild]> FAILURE receiving status for child 14826
Feb 17 11:33:46 sogod [14813]: [ERROR] <0x0x2b5c7eb41ef8[WOWatchDogChild]> socket: <NGActiveSocket[0x0x2b5c7eb3f8e8]: mode=rw address=(null) connectedTo=<0x0x2b5c7eb12518[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>
Feb 17 11:33:46 sogod [14813]: [ERROR] <0x0x2b5c7eb41ef8[WOWatchDogChild]> exception: <NGSocketTimedOutException: 0x2b5c7ebc3fa8> NAME:NGSocketTimedOutException REASON:the socket was shutdown INFO:{errno = 11; error = "Die Ressource ist zur Zeit nicht verf\U00FCgbar"; stream = "{object = <NGActiveSocket[0x0x2b5c7eb3f8e8]: mode=rw address=(null) connectedTo=<0x0x2b5c7eb12518[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>;}"; }
Feb 17 11:33:46 sogod [14813]: <0x0x2b5c7eb41ef8[WOWatchDogChild]> sending terminate signal to pid 14826

Feb 17 11:35:01 sogod [14813]: [ERROR] <0x0x2b5c7eb34308[WOWatchDogChild]> exception: <NGSocketTimedOutException: 0x2b5c8a5e1468> NAME:NGSocketTimedOutException REASON:the socket was shutdown INFO:{errno = 11; error = "Die Ressource ist zur Zeit nicht verf\U00FCgbar"; stream = "{object = <NGActiveSocket[0x0x2b5c89c16928]: mode=rw address=(null) connectedTo=<0x0x2b5c7eaedcd8[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>;}"; }


Feb 17 11:35:59 sogod [14843]: [ERROR] <0x0x2b5c7ede32f8[WOHttpTransaction]> client disconnected during delivery of response for <WORequest[0x0x2b5c8cbed1b8]: method=POST uri=/SOGo/so/pop15959/Mail/0/folderDrafts/newDraft1455705129-1/save app=SOGo rqKey=so rqPath=pop15959/Mail/0/folderDrafts/newDraft1455705129-1/save> (len=381): the socket was shutdown
Feb 17 11:36:01 sogod [14843]: 134.34.1.247 "POST /SOGo/so/pop15959/Mail/0/folderDrafts/newDraft1455705129-1/save HTTP/1.0" 200 381/2676 211.631 - - -55700K
Feb 17 11:36:01 sogod [14813]: [ERROR] <0x0x2b5c7eb12698[WOWatchDogChild]> FAILURE receiving status for child 14842
Feb 17 11:36:02 sogod [14813]: [ERROR] <0x0x2b5c7eb12698[WOWatchDogChild]> socket: <NGActiveSocket[0x0x2b5c7eb6ef88]: mode=rw address=(null) connectedTo=<0x0x2b5c7eadc758[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>
EXCEPTION: <NSException: 0x2b5c901a2ad8> NAME:NSInvalidArgumentException REASON:Tried to add nil to array INFO:(null)
Feb 17 11:36:02 sogod [14813]: [ERROR] <0x0x2b5c7eb12698[WOWatchDogChild]> exception: <NGSocketTimedOutException: 0x2b5c89c14d38> NAME:NGSocketTimedOutException REASON:the socket was shutdown INFO:{errno = 11; error = "Die Ressource ist zur Zeit nicht verf\U00FCgbar"; stream = "{object = <NGActiveSocket[0x0x2b5c7eb6ef88]: mode=rw address=(null) connectedTo=<0x0x2b5c7eadc758[NGLocalSocketAddress]: /tmp/_ngsocket_14813_0x2b5c85e0bd98_000> receive-timeout=1.000s>;}"; }
EXCEPTION: <NSException: 0x2b5c901a2ad8> NAME:NSInvalidArgumentException REASON:Tried to add nil to array INFO:(null)
Feb 17 11:36:03 sogod [14813]: <0x0x2b5c7eb12698[WOWatchDogChild]> sending terminate signal to pid 14842
Feb 17 11:36:03 sogod [14813]: [ERROR] <0x0x2b5c7eb12698[WOWatchDogChild]> FAILURE notifying child 14842
Feb 17 11:36:03 sogod [14813]: <0x0x2b5c7eb12698[WOWatchDogChild]> sending terminate signal to pid 14842
Feb 17 11:36:03 sogod [14813]: <0x0x2b5c7eb88008[WOWatchDogChild]> child 14816 exited
Feb 17 11:36:02 sogod [14835]: [ERROR] <0x8CC67F68[SOGoDraftObject]:newDraft1455705129-1> did not find attachment: '25MB-138jDnDm'
Feb 17 11:36:03 sogod [14813]: <0x0x2b5c7eb88008[WOWatchDogChild]> (terminated due to signal 6)

With the machine deep in swap, I had to killall -9 the sogod processes and clean up /tmp/OGo* before restarting SOGo.
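For anyone ending up in the same state, recovery boiled down to the following (a sketch; the service name and init commands may differ per distribution):

  service sogod stop      # the watchdog alone does not recover the workers
  killall -9 sogod        # workers stuck in the request do not terminate cleanly
  rm -f /tmp/OGo*.tmp     # left-over upload spool files (see note 0009517)
  service sogod start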
(0009563)
ludovic (administrator)
2016-02-17 19:10

Are you sure you updated SOPE?

Because what you sent does not make sense. The worker should NOT be hanging for minutes; it should NOT try to read the content after seeing that the Content-Length is greater than 30MB.

ALL my tests worked regarding that.
(0009564)
Jens Erat (reporter)
2016-02-18 03:40

I think that the upload limit is indeed working -- it blocks single, large attachments.

It does not seem to work for multiple smaller attachments -- each of them is small enough, but together they are still too large for a single mail (and there is some kind of memory leak).

I'm not sure whether this is a "traditional" memory leak: cleaning up the /tmp/OGo* files sometimes seems to be required when restarting the service. Is it possible that SOGo picks those files up automatically when restarted and tries again to perform whatever it failed at before (resulting in those files being stuck)?

We installed both sope and SOGo from your nightly repository on our test server:

# yum list *sope*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.jgotteswinter.com
 * extras: ftp-stud.fht-esslingen.de
 * rpmforge: mirror1.hs-esslingen.de
 * updates: ftp-stud.fht-esslingen.de
Installed Packages
sope49-appserver.x86_64 4.9-20160217_1664.el5.1 installed
sope49-cards.x86_64 2.3.8.20160217-1.el5 installed
sope49-core.x86_64 4.9-20160217_1664.el5.1 installed
sope49-debuginfo.x86_64 4.9-20160217_1664.el5.1 installed
sope49-gdl1.x86_64 4.9-20160217_1664.el5.1 installed
sope49-gdl1-contentstore.x86_64 2.3.8.20160217-1.el5 installed
sope49-gdl1-postgresql.x86_64 4.9-20160217_1664.el5.1 installed
sope49-ldap.x86_64 4.9-20160217_1664.el5.1 installed
sope49-mime.x86_64 4.9-20160217_1664.el5.1 installed
sope49-sbjson.x86_64 2.3.1-20160217_1664.el5.1 installed
sope49-xml.x86_64 4.9-20160217_1664.el5.1 installed
(0009565)
Jens Erat (reporter)
2016-02-18 04:14

I had another try with the 20160218 packages; the problem still exists.
(0009567)
ludovic (administrator)
2016-02-18 20:50

Do you upload them one by one or all at the same time?

What limit have you defined?
(0009581)
Jens Erat (reporter)
2016-02-23 04:45

The maximum size is still 30MB on the test server. I uploaded a bunch of 25MB files at the same time.
(0011114)
ludovic (administrator)
2016-12-28 09:08

The problem is that the message keeps growing when you do that. For example, if your WOMaxUploadSize is set to 25MB and you upload 20 files of 25MB (500MB in total), the uploads themselves will work, because your browser does 20 separate uploads of 25MB files.

Now, where it gets tricky is that the email message on the SOGo server grows by 25MB each time an upload completes - because the mail is saved to the IMAP drafts folder every time an attachment upload finishes.

That takes memory, A LOT of it. A 500MB message will need at least 1 to 1.5GB of memory to be saved correctly on the IMAP server - that is due to the MIME generation of the message before it is saved, and also to the save operation to the IMAP server itself.

I suggest we add SOGoMaximumMessageSize - so that whenever that limit is reached, SOGo will prevent any further attachment from being added.
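As a rough estimate of where the 1 to 1.5GB figure comes from (assuming the attachments are base64-encoded during MIME generation): 500MB of raw attachments grow to roughly 500 x 4/3 ≈ 667MB once encoded, and the raw parts, the encoded message and the copy handed to the IMAP server all have to be held around the same time, so peak usage can easily land in the 1 to 1.5GB range.

The limit proposed here landed as SOGoMaximumMessageSizeLimit (see the changesets below); a configuration sketch, assuming the same kilobyte unit as WOMaxUploadSize:

  {
    SOGoMaximumMessageSizeLimit = 51200;  /* assumed kilobyte unit: ~50MB cap on the assembled draft message */
  }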

- Related Changesets
sogo: v2 32bb1456
Timestamp: 2016-02-16 16:28:11
Author: ludovic
(feat) attachment size upload using WOMaxUploadSize (fixes 0003510 and 0003135)
mod - NEWS
mod - UI/MailerUI/UIxMailEditor.m

sogo: v2 3f68fb7b
Timestamp: 2016-12-28 10:18:10
Author: ludovic
(feat) added new SOGoMaximumMessageSizeLimit config parameter (fixes 0003510)
mod - SoObjects/Mailer/SOGoDraftObject.m
mod - SoObjects/SOGo/SOGoSystemDefaults.h
mod - SoObjects/SOGo/SOGoSystemDefaults.m

sogo: master 2cebee42
Timestamp: 2016-12-28 10:18:10
Author: ludovic
(feat) added new SOGoMaximumMessageSizeLimit config parameter (fixes 0003510)

Conflicts:

SoObjects/Mailer/SOGoDraftObject.m
mod - Documentation/SOGoInstallationGuide.asciidoc
mod - SoObjects/Mailer/SOGoDraftObject.m
mod - SoObjects/SOGo/SOGoSystemDefaults.h
mod - SoObjects/SOGo/SOGoSystemDefaults.m

sogo: maint 81df371e
Timestamp: 2016-12-28 10:18:10
Author: ludovic
Committer: francis
(feat) added new SOGoMaximumMessageSizeLimit config parameter (fixes 0003510)
mod - SoObjects/Mailer/SOGoDraftObject.m
mod - SoObjects/SOGo/SOGoSystemDefaults.h
mod - SoObjects/SOGo/SOGoSystemDefaults.m

- Issue History
Date Modified Username Field Change
2016-02-09 08:18 Jens Erat New Issue
2016-02-09 08:18 Jens Erat File Added: zabbix-memory.png
2016-02-09 08:18 Jens Erat File Added: zabbix-memory-production.png
2016-02-10 08:40 ludovic Note Added: 0009488
2016-02-10 09:14 Jens Erat Note Added: 0009489
2016-02-12 04:11 Christian Mack Note Added: 0009514
2016-02-12 04:48 Jens Erat File Added: sogod.log
2016-02-12 04:51 Jens Erat Note Added: 0009516
2016-02-12 05:04 Jens Erat Note Added: 0009517
2016-02-12 07:31 ludovic Note Added: 0009520
2016-02-12 07:38 Jens Erat Note Added: 0009522
2016-02-12 08:39 ludovic Note Added: 0009525
2016-02-12 08:53 Christian Mack Note Added: 0009526
2016-02-12 09:07 ludovic Note Added: 0009527
2016-02-12 09:25 Christian Mack Note Added: 0009528
2016-02-12 10:02 ludovic Note Added: 0009529
2016-02-12 10:24 Jens Erat Note Added: 0009530
2016-02-12 11:11 ludovic Note Added: 0009531
2016-02-12 11:22 Jens Erat Note Added: 0009532
2016-02-12 11:24 ludovic Note Added: 0009533
2016-02-12 11:43 ludovic Note Added: 0009534
2016-02-16 16:29 ludovic Changeset attached => sogo v2 32bb1456
2016-02-16 16:29 ludovic Assigned To => ludovic
2016-02-16 16:29 ludovic Resolution open => fixed
2016-02-16 16:30 ludovic Status new => resolved
2016-02-16 16:30 ludovic Fixed in Version => 2.3.9
2016-02-17 06:01 Jens Erat Note Added: 0009561
2016-02-17 06:01 Jens Erat Status resolved => feedback
2016-02-17 06:01 Jens Erat Resolution fixed => reopened
2016-02-17 19:10 ludovic Note Added: 0009563
2016-02-18 03:40 Jens Erat Note Added: 0009564
2016-02-18 03:40 Jens Erat Status feedback => assigned
2016-02-18 04:14 Jens Erat Note Added: 0009565
2016-02-18 20:50 ludovic Note Added: 0009567
2016-02-23 04:45 Jens Erat Note Added: 0009581
2016-12-28 08:03 ludovic Severity crash => major
2016-12-28 09:08 ludovic Note Added: 0011114
2016-12-28 09:08 ludovic Severity major => feature
2016-12-28 10:19 ludovic Changeset attached => sogo v2 3f68fb7b
2016-12-28 10:24 ludovic Changeset attached => sogo master 2cebee42
2016-12-28 10:26 ludovic Status assigned => resolved
2016-12-28 10:26 ludovic Fixed in Version 2.3.9 => 3.2.5
2016-12-28 10:26 ludovic Resolution reopened => fixed
2017-01-09 15:49 francis Changeset attached => sogo maint 81df371e

