k***@rockwellcollins.com
2008-09-17 23:45:00 UTC
It's apparently doing a diff, which is what all that hard drive and CPU
activity is about before the network traffic kicks in. I found this
out when I got an error about not having enough disk space. I've read
that it uses /tmp but when I made /tmp a symlink to a large drive, it
still failed with the same message, so I'm not sure where the
temporary file is stored. Once it gets the difference, I believe it
sends it off for a library like neon to handle.
I can't say for sure if it's a problem with neon or with creating the
temporary file, but I know that recompiling everything with the latest
versions of all the software solved the problem. My
configuration options were just to specify where to find neon, apr,
apr-util, Berkeley DB, etc. If anyone is interested in the exact config
options, I can see if I can dig them up for you when I get back to the
computer I used to compile it.
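For anyone following along, the 1.5-era Subversion configure script does take per-dependency path options of this shape. A minimal sketch; every path below is a placeholder, not the actual values from that build:

```shell
# Hypothetical configure invocation -- all prefixes are placeholders,
# not the real layout on the machine used to compile it.
./configure --prefix=/usr/local/subversion \
            --with-neon=/usr/local/neon \
            --with-apr=/usr/local/apr \
            --with-apr-util=/usr/local/apr-util \
            --with-berkeley-db=/usr/local/BerkeleyDB.4.4
```

Running `./configure --help` lists the full set, which is the safest way to confirm the exact spelling for a given release.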
I would be interested in the configure options, since I compile from
source on our Solaris server. The Windows one is stock apache 2.2.9
with svn from the .zip file. I was planning on upgrading to svn 1.5.2
to see if that made any difference from 1.5.1, but haven't had the
time yet.
Since the 413 error is from apache, I would assume the apache/apr
options are the most important...
Kevin R.
--Adam
I upgraded to apache httpd 2.2.9, apr-1.3.3 and apr-util-1.3.4.
[Tue Aug 26 20:28:51 2008] [error] [client 192.168.192.187] Invalid
Content-Length
[Tue Aug 26 20:28:51 2008] [error] [client 192.168.192.187] Could not
get next bucket brigade [500, #0]
I tried changing the LimitXMLRequestBody to 10240 to make sure it was
taking effect, and that resulted in a much quicker failure (about 17
seconds instead of 8 minutes). So I know the directive is being set
properly.
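The "Invalid Content-Length" entries are at least consistent with the file size overflowing a signed 32-bit field somewhere in the client/server stack. This is an assumption about the failure mode, not a confirmed diagnosis, but the arithmetic is easy to check:

```shell
# Sketch of the suspected failure mode (an assumption, not a confirmed
# diagnosis): a ~4 GB size does not fit in a signed 32-bit integer, and
# truncated to 32 bits it collapses to zero.
int32_max=$(( 2**31 - 1 ))          # 2147483647, the classic 2 GB ceiling
file_size=$(( 4 * 2**30 ))          # a 4 GB file, as in the failing commit
echo $(( file_size > int32_max ))   # 1 -- the size no longer fits
echo $(( file_size & 0xFFFFFFFF ))  # 0 -- the low 32 bits are all zero
```

A Content-Length computed from such a wrapped value would look empty or nonsensical to apache, which would match the log lines above.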
I tried it from the server itself and connected to https://localhost
and it went through with no problem. So it looks like this might
actually be a problem with the client side and not the server side. I
was using subversion 1.5.1 on both machines, but they were compiled
with different options. I'll look into it tonight when I get back
from work.
I can reproduce this with apache 2.2.8/svn 1.5.1 on both Windows and
Solaris as servers, using both TortoiseSVN 1.5.2 and the svn 1.5.1
command line on Windows, with a single 4G file.
2G files worked fine in the past with svn 1.4.6. Not sure I ever
tried one quite this large before.
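For anyone trying to reproduce this, a sparse file is enough to exercise the same size limits without tying up real disk space. A sketch using GNU dd (Solaris dd may need different options, and the repository to commit into is up to you):

```shell
# Create a sparse file whose apparent size is exactly 4 GB. With count=0,
# GNU dd truncates/extends the output file to the seek offset without
# writing any data blocks.
dd if=/dev/zero of=big.bin bs=1 count=0 seek=4G 2>/dev/null
# Apparent size in bytes, even though no blocks are allocated.
wc -c < big.bin
```

Adding and committing `big.bin` to a scratch repository should then drive the same PUT code path as the real 4G file.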
The thing I noticed is that the client thrashes the disk for about 8
minutes and never seems to send much network traffic before it
fails...
Kevin R.
--Adam
I'm using subversion 1.5.1 and am unable to upload large files (the
one I'm trying to upload is over 4GB). It looks like subversion
should be able to handle this, however I get "svn: PUT of
413 Request Entity Too Large (https://HOSTNAME)" when I try to
commit the file.
I was looking through the archives and it looks like the only
similar report is this one:
http://subversion.tigris.org/servlets/ReadMsg?listName=users&msgNo=73988
According to the manual, the solution is to use the
LimitXMLRequestBody directive.
http://publib.boulder.ibm.com/httpserv/manual60/mod/mod_dav.html
Does subversion support files over 2GB? If so, can someone point
me in the right direction on what I'm doing wrong in my config?
LimitRequestBody 0
LimitXMLRequestBody 0
DavLockDB "/var/lock/DavLock"
<Location />
  DAV svn
  SVNPath /mnt/monster/svn
  AuthType Basic
  AuthName "Subversion Repository"
  AuthUserFile /etc/svn_htpasswd
  Require valid-user
  SSLRequireSSL
</Location>
Thanks,
Adam