MemoryStream Capacity Problem. Is there another stream class like MemoryStream, other than MemoryTributary?


I'm trying to serialize a large amount of data, and I get an OutOfMemoryException during serialization. My research turned up a known limitation of MemoryStream: it is backed by a single byte array and its Capacity property is an Int32, so it cannot hold more than 2 GB. I found an alternative class called MemoryTributary that can be used instead of MemoryStream, but it gave me really bad performance when I tried it. Is there a framework or another MemoryStream-like class that can handle big data?
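
To make the limit concrete, here is a minimal repro (MemoryStream stores everything in one byte[], so any length past int.MaxValue is rejected up front; the 3 GB figure is just an example):

using System;
using System.IO;

class MemoryStreamLimit
{
    static void Main()
    {
        var ms = new MemoryStream();
        try
        {
            // Anything larger than int.MaxValue (~2 GB) cannot fit in the backing byte[].
            ms.SetLength(3L * 1024 * 1024 * 1024); // 3 GB
        }
        catch (ArgumentOutOfRangeException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}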



Here is a short summary of what I do:




  • Get data from SQL Server

  • Convert the data to a structure

  • Serialize the data

  • Compress the data using the LZ4 algorithm

  • Upload the data to Azure


  • Get the data from Azure

  • Decompress the data using the LZ4 algorithm

  • Deserialize the data

  • Use the data for the operation


My code looks like this:



public static Stream ConvertObjectToStream<T>(T obj)
{
    if (obj == null) return null;
    var boisSerializer = new BoisSerializer();
    try
    {
        var mem = new MemoryStream();
        boisSerializer.Serialize(obj, mem);
        return new MemoryStream(mem.ToArray());
    }
    catch (OutOfMemoryException)
    {
        // Fall back to MemoryTributary when the data no longer fits in a MemoryStream.
        var mem = new MemoryTributary();
        boisSerializer.Serialize(obj, mem);
        return new MemoryTributary(mem.ToArray());
    }
}

public static Stream ConvertObjectToStream(object obj)
{
    if (obj == null) return null;
    var boisSerializer = new BoisSerializer();
    try
    {
        var mem = new MemoryStream();
        boisSerializer.Serialize(obj, obj.GetType(), mem);
        return new MemoryStream(mem.ToArray());
    }
    catch (OutOfMemoryException)
    {
        var mem = new MemoryTributary();
        boisSerializer.Serialize(obj, obj.GetType(), mem);
        return new MemoryTributary(mem.ToArray());
    }
}

public static T ConvertStreamToObject<T>(Stream stream)
{
    stream.Position = 0;
    var boisSerializer = new BoisSerializer();
    return boisSerializer.Deserialize<T>(stream);
}

public static object ConvertStreamToObject(Stream stream, Type type)
{
    stream.Position = 0;
    var boisSerializer = new BoisSerializer();
    return boisSerializer.Deserialize(stream, type);
}

public static Stream CompressStream(this Stream stream, LZ4Level lz4Level = LZ4Level.L00_FAST)
{
    stream.Position = 0;
    var ms = new MemoryStream();
    var settings = new LZ4EncoderSettings { CompressionLevel = lz4Level };
    LZ4EncoderStream target = LZ4Stream.Encode(ms, settings);
    stream.CopyTo(target);
    target.Dispose();
    ms.Dispose(); // ToArray() is still valid on a disposed MemoryStream
    return new MemoryStream(ms.ToArray());
}

public static Stream DecompressStream(this Stream stream)
{
    var ms = new MemoryStream();
    stream.Position = 0;
    LZ4DecoderStream source = LZ4Stream.Decode(stream);
    source.CopyTo(ms);
    source.Dispose();
    ms.Dispose();
    return new MemoryStream(ms.ToArray());
}

public async Task UploadValueToContainer<T>(string containerName, string blobName, T obj)
{
    Stream stream = Core.Tools.Helpers.ConvertObjectToStream(obj).CompressStream();
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    if (!container.Exists()) throw new Exception($"{containerName} container has not been found.");
    CloudBlockBlob blobData = container.GetBlockBlobReference(blobName);
    await blobData.UploadFromStreamAsync(stream);
}

public async Task<T> DownloadValueFromContainer<T>(string containerName, string blobName) where T : new()
{
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    if (!container.Exists()) throw new Exception($"{containerName} container has not been found.");
    CloudBlockBlob blobData = container.GetBlockBlobReference(blobName);
    if (!blobData.Exists())
    {
        return new T();
    }

    try
    {
        var ms = new MemoryStream();
        await blobData.DownloadToStreamAsync(ms);
        return Core.Tools.Helpers.ConvertStreamToObject<T>(ms.DecompressStream());
    }
    catch (OutOfMemoryException)
    {
        var ms = new MemoryTributary();
        await blobData.DownloadToStreamAsync(ms);
        return Core.Tools.Helpers.ConvertStreamToObject<T>(ms.DecompressStream());
    }
}










c# serialization stream memorystream






asked Nov 12 at 7:11









sinanakyazici

  • also if you need to, use a memory mapped file, and a stream that can support it
    – TheGeneral
    Nov 12 at 7:13
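
A minimal sketch of that suggestion, assuming the same BoisSerializer from the question; the path and 4 GB capacity are arbitrary placeholders. A memory-mapped file is file-backed, so the 2 GB byte[] ceiling does not apply, and CreateViewStream exposes it as an ordinary Stream:

using System.IO;
using System.IO.MemoryMappedFiles;

static void SerializeToMappedFile<T>(T obj, string path, long capacity)
{
    // capacity must be chosen up front, e.g. 4L * 1024 * 1024 * 1024 for 4 GB.
    using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Create, null, capacity))
    using (MemoryMappedViewStream stream = mmf.CreateViewStream())
    {
        // Serialize straight into the file-backed stream instead of a MemoryStream.
        new BoisSerializer().Serialize(obj, stream);
    }
}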










  • I want to store 2 GB of data for serialization. Sorry, I didn't know about the XY problem. I described my solution because I'm not sure whether my approach is right or wrong. @mjwills
    – sinanakyazici
    Nov 12 at 7:15







  • @TheGeneral I just looked at memory-mapped files. That sounds great, and it could work. I will try it. Thanks.
    – sinanakyazici
    Nov 12 at 7:18











  • Your main issue is that you are using your streams not as streams, but as arrays. From what I can see you don't even need a memory stream. When you create your serialized data, why not serialize directly into the compression stream, and then feed the compression stream into the network stream? When you read your data, feed the input network stream into the decompression stream, and that one into your deserialization. This way, regardless of how large your data is, you would use a constant amount of memory of some kilobytes or megabytes (depending on how large your buffers are).
    – ckuri
    Nov 12 at 8:25
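
For the upload direction, this suggestion could look roughly like the sketch below: the serializer writes into the LZ4 encoder, which writes into the blob's write stream, so the full payload is never buffered in memory. It assumes the K4os.Compression.LZ4 and WindowsAzure.Storage APIs already used in the question; the method name UploadValueToContainerStreamed is made up here:

public async Task UploadValueToContainerStreamed<T>(string containerName, string blobName, T obj)
{
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    // Serializer -> LZ4 encoder -> blob write stream, all chained; memory use stays
    // bounded by the internal buffers instead of the full payload size.
    using (var blobStream = await blob.OpenWriteAsync())
    using (var lz4 = LZ4Stream.Encode(blobStream,
        new LZ4EncoderSettings { CompressionLevel = LZ4Level.L00_FAST }))
    {
        new BoisSerializer().Serialize(obj, lz4);
    }
}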











  • For example, your deserialization should be something like this: using (var networkStream = await blobData.DownloadToStreamAsync(ms)) using (var decompressStream = LZ4Stream.Decode(networkStream)) { object o = boisSerializer.Deserialize(decompressStream, type); return o; }. As you can see there is no need to copy your data around multiple times.
    – ckuri
    Nov 12 at 8:29
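
The download direction from the same comment, sketched with OpenReadAsync instead of DownloadToStreamAsync (DownloadToStreamAsync returns Task, not a stream); again only a sketch against the APIs used in the question, with a hypothetical method name:

public async Task<T> DownloadValueFromContainerStreamed<T>(string containerName, string blobName)
{
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    // Blob read stream -> LZ4 decoder -> deserializer; no intermediate copies.
    using (var blobStream = await blob.OpenReadAsync())
    using (var lz4 = LZ4Stream.Decode(blobStream))
    {
        return new BoisSerializer().Deserialize<T>(lz4);
    }
}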












