282

I'm facing this error when I try to clone a repository from GitLab (GitLab 6.6.2 4ef8369):

remote: Counting objects: 66352, done.
remote: Compressing objects: 100% (10417/10417), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The clone is then aborted. How can I avoid this?

1 Comment
  • In our case, we are only allowed to use https to fetch data from the vendor. We tried --depth=1 and all the configuration options and none of them worked. The vendor gave us a new https url and it finally worked. We have no idea what they did under the hood. Commented Jul 16, 2024 at 8:07

30 Answers

425

This happens more often than not: I am on a slow internet connection and have to clone a decently large Git repository. The most common issue is that the connection closes and the whole clone is cancelled.

Cloning into 'large-repository'...
remote: Counting objects: 20248, done.
remote: Compressing objects: 100% (10204/10204), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining 
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

After a lot of trial and error, and many “remote end hung up unexpectedly” failures, I have a way that works for me. The idea is to do a shallow clone first and then update the repository with its full history.

$ git clone http://github.com/large-repository --depth 1
$ cd large-repository
$ git fetch --unshallow

11 Comments

This is the only answer that describes a workaround for the problem without switching to SSH. This worked for me, thanks!
The key here is --depth 1 and --unshallow. This also works for fetching an existing repo on slow connection: git fetch --depth 1 then git fetch --unshallow.
Now the git fetch --unshallow command gives an RPC failed error.
Didn't work for me. Failed on the git fetch --unshallow. Guess my repo is too big even for this approach. Only SSH worked.
If git fetch --unshallow still reports errors, you can run git fetch --depth=100, then git fetch --depth=200, then git fetch --depth=300 and so on to fetch the repo incrementally. This approach works for the Linux kernel repo, which is extremely large.
140

After a few days, today I finally resolved this problem. Generate an SSH key by following this article:

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Declare it to:

  1. Your Git provider (GitLab in my case; the same applies to GitHub).
  2. Your local SSH identity (a sketch of this step follows below).
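A minimal sketch of step 2, assuming an ed25519 key and a gitlab.com remote (adjust the host and the email to your setup):

# generate a key pair (skip this if you already have one)
ssh-keygen -t ed25519 -C "your_email@example.com"

# start the agent and load the key into it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# after adding ~/.ssh/id_ed25519.pub to your Git provider, test the connection
ssh -T git@gitlab.com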

Then clone with:

git clone git@gitlab.com:my_group/my_repository.git

and no error occurs.

The problem above,

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

occurs because of an error when cloning over the HTTP protocol (which uses the curl command under the hood).

Additionally, you should increase the HTTP post buffer size:

git config --global http.postBuffer 524288000

10 Comments

Changing from HTTP to SSH worked for me. Setting http.postBuffer didn't.
Changing http.postBuffer worked for me - thanks!
worked for me too for pulling a large solution via a slow vpn connection
Beware: I experienced several issues with npm publish when raising the postBuffer. When I set it down to 50000000, issues were gone. The default value is 1000000, by the way.
Changing http.postBuffer to 524288000 worked for me. Thank you!
64

You need to turn off compression:

git config --global core.compression 0

Then do a shallow clone:

git clone --depth=1 <url>

Then, the most important step, cd into your cloned project:

cd <shallow cloned project dir>

Now deepen the clone, step by step:

git fetch --depth=N, with increasing N

e.g.

git fetch --depth=4

then,

git fetch --depth=100

then,

git fetch --depth=500

You can choose how many steps you want by adjusting N,

and finally download all of the remaining revisions using:

git fetch --unshallow 

upvote if it helps you :)
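If you want to automate the increasing-depth steps above, a minimal sketch could look like this (the 100-commit step size is arbitrary, and git rev-parse --is-shallow-repository needs Git 2.15 or later):

#!/bin/sh
# deepen the shallow clone in small increments so each fetch stays short
# and a dropped connection only costs the current step
depth=100
while [ "$(git rev-parse --is-shallow-repository)" = "true" ]; do
    git fetch --depth=$depth || sleep 5    # on a failed fetch, pause briefly before the next attempt
    depth=$((depth + 100))
done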

5 Comments

This is the only option that worked for me. In my case the error was happening on git clone --depth=1 <url>. However, as per your instructions, I first executed git config --global core.compression 0, then all the following steps, and everything worked great! PS: I have a good internet connection; it's just behaving weirdly today. Thank you!
Can you detail what disabling compression helps accomplish?
@Slim What we are doing here is disabling the default behaviour of compressing the full object set before fetching. Instead, we fetch without compression, which lets us fetch step by step by specifying the depth.
Is there a way to find out how deep the history is? It keeps downloading even when the depth is already sufficient.
This really works when nothing else works. Thanks!
29

When I tried cloning from the remote, I got the same issue repeatedly:

remote: Counting objects: 182, done.
remote: Compressing objects: 100% (149/149), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

Finally this worked for me:

git clone https://<username>@<host>/repositoryName.git --depth 1

3 Comments

what --depth 1 does
If the source repository is complete, convert a shallow repository to a complete one, removing all the limitations imposed by shallow repositories. If the source repository is shallow, fetch as much as possible so that the current repository has the same history as the source repository.
But I don't want to clone, I want to push. How can I do that with depth?
15

Simple solution: rather than cloning via HTTPS, clone via SSH.

For example:

git clone https://github.com/vaibhavjain2/xxx.git - Avoid
git clone git@github.com:vaibhavjain2/xxx.git - Correct

2 Comments

Yes. I am a Windows user.
You can't clone via SSH if you don't have the permissions. For that reason I don't think this can be the general solution; perhaps a workaround.
14

Usually it happens because of one of the reasons below:

  1. Slow internet.
  • Switching to a LAN cable with a stable network connection helps in many cases. Avoid running any parallel network-intensive tasks while you are fetching.
  2. Small TCP/IP connection timeout on the server side you are fetching from.
  • There is not much you can do about this. All you can do is ask your system admin or the responsible CI/CD team to increase the TCP/IP timeout, and wait.
  3. Heavy load on the server.
  • Due to heavy server load during working hours, downloading a large file can fail repeatedly. Start the download and leave your machine running overnight.
  4. Small HTTPS buffer on the client machine.
  • Increasing the buffer sizes for posts and requests might help, but is not guaranteed:

git config --global http.postBuffer 524288000

git config --global http.maxRequestBuffer 524288000

git config --global core.compression 0

1 Comment

This is a perfect solution, you saved me
8

Network connection problems.
Maybe due to a persistent-connection timeout.
The best fix is to switch to another network.

1 Comment

I changed the Wi-Fi for a faster internet connection and then it worked; thanks for saving my time.
7

This worked for me: using git:// instead of https://.
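For example (the repository URL is a placeholder, and the remote must actually serve the git:// protocol for this to work):

git clone git://example.com/owner/repo.git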

2 Comments

Actually, this answer is more specific than the next ones in this thread.
Which protocol or protocols are offered depends on the repo you're cloning; some do not offer the git protocol. I'm facing this problem with the postgres repo git.postgresql.org/git/postgresql.git and they do not. In fact, even --depth 2 fails for me, so I seem to be stuck :cry:
6

As mentioned above, first of all run your git command from bash, adding the enhanced logging variables at the beginning: GIT_TRACE=1 GIT_CURL_VERBOSE=1 git ...

e.g. GIT_CURL_VERBOSE=1 GIT_TRACE=1 git -c diff.mnemonicprefix=false -c core.quotepath=false fetch origin

This will show you detailed error information.

Comments

4

In my case this problem occurred because of the proxy configuration. I added the Git server's IP address to the proxy exceptions. The Git server was local, but the no_proxy environment variable was not set correctly.

I used these commands to identify the problem:

#Linux:
export GIT_TRACE_PACKET=1
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1

#Windows
set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1

The output contained a "Proxy-Authorization" header even though the Git server was local and should not have gone through the proxy at all. The real problem, however, turned out to be the file size limits defined by the proxy rules.
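For completeness, a minimal sketch of excluding an internal Git server from the proxy (the hostname is hypothetical; adjust it to your environment):

# tell curl-based tools, including git, not to proxy this host
export no_proxy=git.internal.example.com

# or disable the proxy in git itself, only for URLs on that server
git config --global http.https://git.internal.example.com.proxy ""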

Comments

4

For me, the issue was that the connection closed before the whole clone completed. I used an ethernet connection instead of Wi-Fi, and that solved it.

Comments

4

This error seems to happen more commonly with a slow or troubled internet connection. I connected with a good internet speed and then it worked perfectly.

Comments

4

What worked for me: this error can occur because of Git's memory requirements. I added these lines to my global Git configuration file .gitconfig, which lives in $USER_HOME, i.e. C:\Users\<USER_NAME>\.gitconfig:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m
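If you prefer not to edit the file by hand, the same settings can be applied from the command line (same values as above):

git config --global core.packedGitLimit 512m
git config --global core.packedGitWindowSize 512m
git config --global pack.deltaCacheSize 2047m
git config --global pack.packSizeLimit 2047m
git config --global pack.windowMemory 2047m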

Comments

2

This problem arises when you have a proxy issue or a slow network. You can go with the depth solution, or:

git fetch --all  or git clone 

    

If this gives a curl 56 Recv failure error, then download the repository as a zip instead, or specify the name of a branch instead of --all:

git fetch origin BranchName 
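A related option that keeps the transfer small in a single step is a single-branch shallow clone (the branch name and repository URL below are placeholders):

git clone --branch main --single-branch --depth 1 https://example.com/owner/repo.git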

1 Comment

Using git fetch origin BranchName I was able to continue an interrupted git clone. Thank you.
2

This problem usually occurs while cloning large repos. If git clone http://github.com/large-repository --depth 1 does not work in the Windows cmd prompt, try running the command in Windows PowerShell.

Comments

2

I faced this problem as well and resolved it. The problem was a slow internet connection; check your internet connection, nothing else. Once I connected with a good internet speed it worked perfectly. I hope this helps you.

Comments

2

This may be a problem with a very slow internet connection. If you cannot change the network, try increasing the waiting time:

  • git config --global http.lowSpeedLimit 0
  • git config --global http.lowSpeedTime 300 (here the timeout is set to 300 seconds; you can adjust it)

Comments

1

Tried all of the answers on here. I was trying to add cocoapods onto my machine.

I didn't have an SSH key so thanks @Do Nhu Vy

https://stackoverflow.com/a/38703069/2481602

And finally used

git clone https://git.coding.net/CocoaPods/Specs.git ~/.cocoapods/repos/master

to finally fix the issue, as described in https://stackoverflow.com/a/50959034/2481602.

Comments

1

There can be two reasons:

  1. The internet connection is slow (this was the case for me).
  2. The buffer size is too small; in that case you can run git config --global http.postBuffer 524288000.

Comments

1

This problem is solved for me now. I was facing it because my project manager had changed the repo name, but I was still using the old repo name.

Engineer@-Engi64 /g/xampp/htdocs/hospitality
$ git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repo/cms
Cloning into 'cms'...
remote: Counting objects: 10647, done.
error: RPC failed; curl 56 OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 10054
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

How I solved this problem: the repo link was not valid, which is why I was facing this issue. Please check your repo link before cloning.

Comments

0

Try changing the clone protocol.

For example, this error happened with "git clone https://xxxxxxxxxxxxxxx".

You can try "git clone git://xxxxxxxxxxxxxx" instead; that may work.

Comments

0

I got the same issue while pushing some code to Github.

I tried git config --global http.postBuffer 524288000, but it didn't work for me.

Reason

It happens when your commit history and/or some file is too big.

My Case

In my case, package-lock.json was causing the problem. It was 1500+KB in size and 33K lines of code.

How I solved it?

  1. I committed and pushed everything except package-lock.json.
  2. Copied the content of package-lock.json.
  3. Created a new file named package-lock.json from the GitHub repo page.
  4. Pasted the content of package-lock.json into it and committed.
  5. Ran git pull locally.

And Done.
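If you prefer to stay on the command line, a related general workaround for oversized pushes is to push the history in smaller chunks, so that no single push transfers too much data (the commit offset and branch name below are assumptions; adjust them to your history):

git push origin HEAD~10:refs/heads/main    # push the older part of the pending commits first
git push origin main                       # then push the remainder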

Tips

  • Keep each commit small.
  • Push frequently.
  • Use a good internet connection.

I hope it helped you.

Comments

0

None of the recommended solutions worked for me due to problems with my network.

I managed to bypass it by

  1. creating an empty local git project, then
  2. setting the remote URL to point to the origin URL, and finally
  3. doing a shallow fetch with git fetch --depth 1 to get the branch tips.

repo_url=https://github.com/owner/repo-name
mkdir repo-name && cd repo-name && git init
git remote add origin $repo_url
git fetch --depth 1

This essentially fetches only the latest commit from every branch, which is what I needed. I needed to get my hands on the source code first, and not necessarily the entire commit history.
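If you also need a working tree after the shallow fetch, a follow-up checkout gives you one (the branch name main is an assumption; use whichever branch the remote actually advertises):

git checkout main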

POSSIBLE EXPLANATION BELOW.

Apparently, this might be caused by the fact that a fresh clone does not have a git DB yet at the start (inside the .git directory), and thus some work has to be done to instantiate the database, before the handler can proceed to transporting the data. This is a complex process which is dependent on many factors such as network protocol and security configs.

On the other hand, with a local project already initialized with a .git directory, fetch only needs to negotiate the tips of the commit tree.

You can read more about the chicken-and-egg problem of implementing git clone from the Git source code here.

Comments

0

I have been encountering the same problem, but as part of CMake's FetchContent module, which does not allow granularly setting clone options such as --depth, and I was not even sure it was picking up the right config values such as those proposed above (http.postBuffer, pack.depth, etc.). None of the suggestions from the web worked for me; I had about a dozen different options set in the config.

The solution for me was to use a VPN, or simply to switch to another network; maybe it was an ISP problem, as the nature of the problem seems very random.

Comments

0

If anyone faces this issue again in the future: setting the following solved it for me.

git config --global url."git@github.com:".insteadOf "https://github.com/"

It makes Git rewrite HTTPS GitHub URLs to their SSH equivalents, so clones and fetches go over SSH instead.
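To confirm the rewrite is in place, you can list the configured URL rewrites:

git config --global --get-regexp insteadof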

Comments

-1

I ran into the same issue with super slow network speeds causing all sorts of errors when fetching Pods (stuff like RPC failed; curl ... transfer closed with outstanding read data remaining). It was driving me crazy, and I couldn't just switch networks.

What finally worked for me was using this command:

pod install --no-repo-update

I basically ran pod install --no-repo-update over and over again. The idea is that every time it fails halfway through, some of the Pods have still been successfully installed. So when you run it again, CocoaPods doesn't re-download everything from scratch—it just picks up where it left off. Eventually, after a few retries, I ended up with all Pods installed.

To break it down:

  1. Start by running pod install --no-repo-update.
  2. If it fails with a network error, run the same command again.
  3. Each time, more Pods get installed and cached locally.
  4. After a few retries, you have all your Pods, and the install is "complete."

This trick saved me from the frustration of a never-ending loop of failed installations. Whether you're working with React Native, Expo, Swift, or any iOS project that relies on CocoaPods, this approach can help you inch your way towards a fully installed set of Pods, even on a shaky connection.

Comments

-2

I was able to clone the repo with GitHub Desktop

1 Comment

This does not provide an answer to the question. Once you have sufficient reputation you will be able to comment on any post; instead, provide answers that don't require clarification from the asker. - From Review
-2

git config

[core]
    autocrlf = input
    compression = 0
[remote "origin"]
    proxy = 127.0.0.1:1086
[http]
    version = HTTP/1.1
    postBuffer = 524288000

retry.sh

#!/bin/sh
# keep retrying the clone until it succeeds; a failed clone cleans up
# after itself, so simply running it again is safe
set -x
while true
do
  git clone xxxxx
  if [ $? -eq 0 ]; then
    break
  fi
done

1 Comment

Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.
-9

These steps are working for me:

cd [dir]
git init
git clone [your Repository Url]

I hope that works for you too.

Comments

-17

try this

$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup

This worked for me. (screenshot: capture.png)

1 Comment

The bug is sporadic due to an unreliable network. The solution presented here didn't actually fix the problem. The network just happened to be more reliable at the moment you tried cloning again.
