Discussion:
413 Request Entity Too Large
k***@rockwellcollins.com
2008-09-17 23:45:00 UTC
Permalink
Post by Adam Nichols
It's apparently doing a diff, which is what all the hard drive and CPU
activity is about before the network traffic kicks in. I found this
out when I got an error about not having enough disk space. I've read
that it uses /tmp, but when I made /tmp a symlink to a large drive it
still failed with the same message, so I'm not sure where the
temporary file is stored. Once it gets the difference, I believe it
hands it off to a library like neon.
I can't say for sure if it's a problem with neon or with creating the
temporary file, but I know that recompiling everything against the
latest versions of all the software solved the problem. My
configuration options were just to specify where to find neon, apr,
apr-util, Berkeley DB, etc. If anyone is interested in the exact config
options, I can see if I can dig them up when I get back to the
computer I used to compile it.
I would be interested in the configure options, since I compile from
source on our Solaris server. The Windows one is stock apache 2.2.9,
with svn from the .zip file. I was planning on upgrading to svn 1.5.2
to see if that made any difference from 1.5.1, but haven't had the
time yet.

Since the 413 error is from apache, I would assume the apache/apr
options are the most important...
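
One way to take the svn client out of the picture might be to PUT a
file of the same size at the server with a different client; curl is
64-bit clean about Content-Length, so whether apache answers 413 to it
should say which side the problem is on. Something like this, where
the URL, user and file name are made up (mod_dav_svn will presumably
reject a bare PUT as a commit, but a 413 would still point at the
request size handling):

curl -k -u user -T huge_test_file.bin https://svn.example.com/svn/test/huge_test_file.bin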

Kevin R.
Post by Adam Nichols
I upgraded to apache httpd 2.2.9, apr-1.3.3 and apr-util-1.3.4
[Tue Aug 26 20:28:51 2008] [error] [client 192.168.192.187] Invalid Content-Length
[Tue Aug 26 20:28:51 2008] [error] [client 192.168.192.187] Could not get next bucket brigade [500, #0]
I tried changing the LimitXMLRequestBody to 10240 to make sure it was
taking effect, and that resulted in a much quicker failure (about 17
seconds instead of 8 minutes). So I know the directive is being set
to 0 properly.
I tried it from the server itself and connected to https://localhost
and it went through with no problem. So it looks like this might
actually be a problem with the client side and not the server side. I
was using subversion 1.5.1 on both machines, but they were compiled
with different options. I'll look into it tonight when I get back
from work.
I can reproduce this with apache 2.2.8/svn 1.5.1 on both Windows and
Solaris servers, using both TortoiseSVN 1.5.2 and the svn 1.5.1
command line on Windows, with a single 4G file.
2G files worked fine in the past with svn 1.4.6. Not sure I ever
tried one quite this large before.
The thing I noticed is that the client thrashes the disk for about 8
minutes and never seems to send much network traffic in the time
before it fails...
Kevin R.
Post by Adam Nichols
I'm using subversion 1.5.1 and am unable to upload large files (the
one I'm trying to upload is over 4GB). It looks like subversion
should be able to handle this; however, I get "svn: PUT of
413 Request Entity Too Large (https://HOSTNAME)" when I try to import
the file.
I was looking through the archives, and it looks like the only
solution proposed was about client side certs (which I do not use).
Source:
http://subversion.tigris.org/servlets/ReadMsg?listName=users&msgNo=73988
According to the manual, the solution is to use the
LimitXMLRequestBody directive.
http://publib.boulder.ibm.com/httpserv/manual60/mod/mod_dav.html
Does subversion support files over 2GB? If so, can someone point me
in the right direction on what I'm doing wrong in my config?
Below are the relevant config options from my apache:
LimitRequestBody 0
LimitXMLRequestBody 0
DavLockDB "/var/lock/DavLock"
<Location />
  DAV svn
  SVNPath /mnt/monster/svn
  AuthType Basic
  AuthName "Subversion Repository"
  AuthUserFile /etc/svn_htpasswd
  Require valid-user
  SSLRequireSSL
</Location>
Thanks,
Adam
---------------------------------------------------------------------
k***@rockwellcollins.com
2008-09-18 14:48:41 UTC
Permalink
Post by Adam Nichols
The error message is from apache, but if the client is sending the
wrong size in the header, it would cause the error by no fault of
apache.
Excellent point. I was blaming the server, but it could be the client.
(Or even both could be wrong.)

Now I'm using stock apache 2.2.9 (from apache.org) on Windows Server
2003 with the stock win32 svn 1.5.1 apache modules for apache 2.2 on
one server, and a self-compiled apache 2.2.8 and svn 1.5.1 on the
Solaris server.

I've tried the pre-compiled win32 1.5.1 distribution and TortoiseSVN
1.5.3 on Windows, and my self-compiled svn 1.5.1 command-line
executables on Solaris.

All the variations from client to server using the http protocol
fail with a 413 error on a 4G file. (svn protocol works fine)

I'm using apache 2.2.8 built from source on Solaris, since the apr 1.3
LDAP stuff seems to be quite broken in apache 2.2.9.

Looks like I'm using neon 0.28.2, so I'll update that when
I rebuild svn 1.5.2 on solaris and see if anything changes...

Thanks for the config info.
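
Side note: the client's User-Agent header includes the neon version,
so if your access log uses the combined format you can confirm what
each client is actually linked against (log path is just an example):

grep -o 'SVN/[^"]*' /var/log/httpd/access_log | sort -u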

Kevin R.
---------------------------------------------------------------------
Adam Nichols
2008-09-18 02:49:13 UTC
Permalink
The error message is from apache, but if the client is sending the
wrong size in the header, it would cause the error by no fault of
apache.
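
One way to see how apache reacts to a bad size header, with svn out
of the loop entirely, is to hand it a raw request. Host and path here
are made up, and you'd wrap this in openssl s_client if the port is
https-only:

printf 'PUT /svn/test/foo HTTP/1.1\r\nHost: svn\r\nContent-Length: -1\r\nConnection: close\r\n\r\n' | nc svn.example.com 80

I'd expect apache to refuse that and log the same "Invalid
Content-Length" error you saw.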

Berkeley DB 4.4.20
I didn't write down the options I used here, but I believe the only
option I gave this was --prefix=/usr

apr-1.3.3
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/etc --with-gnu-ld

apr-util-1.3.4
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/etc
--with-gnu-ld --with-apr=/usr --with-berkeley-db=/usr

neon-0.28.3
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
--with-gnu-ld --with-ssl --with-zlib --enable-shared

subversion-1.5.1
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
--with-gnu-ld --without-berkeley-db --without-zlib --without-jdk
--without-jikes --without-swig --without-junit

httpd-2.2.9
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
--enable-ssl --enable-dav --enable-http --enable-so --enable-rewrite

To preempt the question... I'm not sure why I omitted Berkeley DB
support in subversion but included it in apr-util. At any rate, these
options worked for me on Linux 2.6.15.5; good luck with Solaris.

--Adam
---------------------------------------------------------------------
Adam Nichols
2008-09-22 06:51:03 UTC
Permalink
Correction:
subversion DOES need neon support to be able to upload huge (4GB+)
files to an SVN repository.

I pulled the settings I posted earlier off the wrong machine. However,
I ran into the same problem when connecting to the same server from a
different client machine. The problem existed with Berkeley DB, apr and
apr-util support enabled. Once I included support for neon 0.28.3, it
worked like a champ. Including neon support with version 0.25.5 did
not help.

So that should narrow it down for the other 2 people on Earth who
actually run into this problem because they feel the need to store
ridiculously large files in an svn repository.
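
If it helps anyone else rebuilding, I believe subversion's configure
can be pointed at a specific neon install, so something along these
lines (prefix is just an example) should keep it from silently picking
up an old 0.25.x copy:

./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --with-gnu-ld --with-neon=/usr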

--Adam
---------------------------------------------------------------------
k***@rockwellcollins.com
2008-09-22 22:19:17 UTC
Permalink
FYI, I'm seeing the Subversion clients on Windows
(both the command line and TortoiseSVN) send a negative
Content-Length value for my large 4G test file.

I'm assuming neon is computing this value?

Kevin R.

(Captured using the Fiddler HTTP debugging proxy on Windows.)

PUT
/kmr_test/!svn/wrk/4ab47e83-ef93-4645-a43c-3c7b9e59dcbe/trunk/test/foo2.bar
HTTP/1.1
Host: svn
User-Agent: SVN/1.5.2 (r32768) neon/0.28.3
Connection: TE
TE: trailers
Content-Length: -262360009
X-SVN-Result-Fulltext-MD5: ea709fcf11ca4c18004c52770ee83305
Content-Type: application/vnd.svn-svndiff
DAV: http://subversion.tigris.org/xmlns/dav/svn/depth
DAV: http://subversion.tigris.org/xmlns/dav/svn/mergeinfo
DAV: http://subversion.tigris.org/xmlns/dav/svn/log-revprops
Accept-Encoding: gzip
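
If that's 32-bit wraparound, the intended length would have been
-262360009 + 2^32 = 4032607287 bytes, about 3.8 GB, which is in the
right ballpark for this file after deltifying. The arithmetic is easy
to check with 64-bit shell math:

echo $(( 4032607287 - 4294967296 ))   # prints -262360009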

k***@rockwellcollins.com
2008-09-22 14:11:47 UTC
Permalink
Post by Adam Nichols
subversion DOES need neon support to be able to upload huge (4GB+)
files to an SVN repository.
I pulled the settings I posted earlier off the wrong machine. However,
I ran into the same problem when connecting to the same server from a
different client machine. The problem existed with Berkeley DB, apr and
apr-util support enabled. Once I included support for neon 0.28.3, it
worked like a champ. Including neon support with version 0.25.5 did
not help.
So that should narrow it down for the other 2 people on Earth who
actually run into this problem because they feel the need to store
ridiculously large files in an svn repository.
Ugh. Being one of these other 2 people, I still haven't been able to
find the correct recipe to support this over http... I have
found a lot of people complaining about some 2G hard limit in
Apache 2.2 on Windows, but haven't found the exact resolution.

Adam, is the file indeed >2G when compressed and deltified? For me,
it seems to be a problem with the PUT size to apache for a particular
file. This size can be considerably smaller than the actual file
size.
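
For a fresh import there's no earlier revision to diff against, so a
rough stand-in for the deltified PUT size is just the compressed size
of the file (file name made up, and svndiff isn't gzip, so this is
only a ballpark):

gzip -c huge_test_file.bin | wc -c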

Kevin R.
---------------------------------------------------------------------