How to make webclient download file again if failed?
I'm trying to download a list of image links (up to 40) to my server using a foreach loop.
Sometimes a link exists, yet the code still drops into the catch block and the download of the remaining links is cancelled. Maybe it just needs to wait a little? When I debug the app, I can see that the link the application skipped (the one that sent it into catch) is actually available, but it sometimes only opens in my browser after a few seconds, so the server I'm downloading from occasionally needs more time to respond.
string newPath = "~/data/" + model.PostID + "/" + name + "/";
// test1 is a list of links
foreach (var item1 in test1)
{
    HttpWebRequest request = WebRequest.Create(item1) as HttpWebRequest;
    request.Method = "HEAD";
    try
    {
        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
        {
            var webClient = new WebClient();
            string path = newPath + i + ".jpg";
            webClient.DownloadFileAsync(new Uri(item1), Server.MapPath(path));
            string newlinks = "https://example.com/data/" + chapter.PostID + "/" + name + "/" + i + ".jpg";
            allimages = allimages + newlinks + ',';
            response.Close();
            i++;
        }
    }
    catch
    {
        break;
    }
}
My main goal is to fix this issue, but as I saw while debugging:
- The image links I'm trying to download do exist.
- They sometimes need more time to respond.
So how can I fix this? When a download is cancelled even though the link exists, what should I do?
c# webclient
Keep track of failed downloads, and try them again (i.e. an extra loop). – mjwills, Nov 10 at 11:45
Possible duplicate of How to change the timeout on a .NET WebClient object – mjwills, Nov 10 at 11:46
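A rough sketch of that suggestion (not the commenters' own code): collect the links that fail and run an extra pass over them. It assumes this runs inside the question's async controller action so that test1, newPath, and Server.MapPath are available; the WebException filter and the two-second delay before each retry are illustrative choices.

// Sketch of the "extra loop" idea: remember failed links instead of breaking out of the loop.
// test1, newPath and Server.MapPath come from the question; the rest is illustrative.
var failed = new List<(string Url, string SavePath)>();
int i = 0;

foreach (var item1 in test1)
{
    string savePath = Server.MapPath(newPath + i + ".jpg");
    try
    {
        using (var webClient = new WebClient())
        {
            // DownloadFileTaskAsync returns a Task, so the download can be awaited.
            await webClient.DownloadFileTaskAsync(new Uri(item1), savePath);
        }
    }
    catch (WebException)
    {
        failed.Add((item1, savePath));
    }
    i++;
}

// Extra pass over the failures; give the slow server a moment before each retry.
foreach (var (url, savePath) in failed)
{
    await Task.Delay(2000);
    using (var webClient = new WebClient())
    {
        await webClient.DownloadFileTaskAsync(new Uri(url), savePath);
    }
}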
3 Answers
You can use this example:

using System;
using System.ComponentModel;
using System.Net;

class WebClientUtility : WebClient
{
    public int Timeout { get; set; }

    public WebClientUtility() : this(60000) { }

    public WebClientUtility(int timeout)
    {
        this.Timeout = timeout;
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        // Apply the configured timeout to every request this client creates.
        var request = base.GetWebRequest(address);
        if (request != null)
        {
            request.Timeout = Timeout;
        }
        return request;
    }
}

public class DownloadHelper : IDisposable
{
    private WebClientUtility _webClient;
    private string _downloadUrl;
    private string _savePath;
    private int _retryCount;

    public DownloadHelper(string downloadUrl, string savePath)
    {
        _savePath = savePath;
        _downloadUrl = downloadUrl;
        _webClient = new WebClientUtility();
        _webClient.DownloadFileCompleted += ClientOnDownloadFileCompleted;
    }

    public void StartDownload()
    {
        _webClient.DownloadFileAsync(new Uri(_downloadUrl), _savePath);
    }

    private void ClientOnDownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            // The download failed: retry the same URL up to 3 times.
            _retryCount++;
            if (_retryCount < 3)
            {
                _webClient.DownloadFileAsync(new Uri(_downloadUrl), _savePath);
            }
            else
            {
                Console.WriteLine(e.Error.Message);
            }
        }
        else
        {
            _retryCount = 0;
            Console.WriteLine($"successfully downloaded: {_downloadUrl} to {_savePath}");
        }
    }

    public void Dispose()
    {
        _webClient.Dispose();
    }
}

class Program
{
    private static void Main(string[] args)
    {
        for (int i = 0; i < 100; i++)
        {
            var downloadUrl = $@"https://example.com/mag-{i}.pdf";
            var savePath = $@"D:\DownloadFile\FileName{i}.pdf";
            DownloadHelper downloadHelper = new DownloadHelper(downloadUrl, savePath);
            downloadHelper.StartDownload();
        }
        Console.ReadLine();
    }
}

To fix the timeout problem, you can create a derived class and set the Timeout property of the underlying WebRequest.
For the retry, you can use the WebClient's DownloadFileCompleted event and implement your retry pattern there.
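For the question's scenario, here is a rough sketch (not part of the original answer) of how this helper could be dropped into the existing foreach; test1, newPath, and Server.MapPath are the names from the question:

// Sketch only: each DownloadHelper retries its own file up to 3 times
// via the DownloadFileCompleted handler shown above.
int i = 0;
foreach (var item1 in test1)
{
    string savePath = Server.MapPath(newPath + i + ".jpg");
    var downloadHelper = new DownloadHelper(item1, savePath);
    downloadHelper.StartDownload();
    i++;
}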
You're using the async version, DownloadFileAsync. However, you're not awaiting the call, which leaves you with unpredictable behaviour. Make your method async and then use this:

await webClient.DownloadFileAsync(new Uri(item1), Server.MapPath(path));
My method is already async. – Dummies EBooks, Nov 10 at 18:30
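Note that if the compiler complains that DownloadFileAsync cannot be awaited (it is the event-based method and returns void), the Task-returning variant is DownloadFileTaskAsync. A minimal sketch, reusing item1, path, and Server.MapPath from the question's code:

// DownloadFileTaskAsync returns a Task, so it can be awaited inside an async method.
using (var webClient = new WebClient())
{
    await webClient.DownloadFileTaskAsync(new Uri(item1), Server.MapPath(path));
}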
This solved my case:

await Task.Run(() =>
{
    webClient.DownloadFileAsync(new Uri(item1), Server.MapPath(path));
});