howdy folks!
i have a little problem, and am looking for suggestions and insights.
we're developing a prototype for one of our customers: a single client application that "simulates" many clients by calling the actual, implemented web services. the goal is to see how much stress the server can take, and then optimize the code, the IIS configuration, and so on.
the client spawns x threads, and each thread makes y calls to the server. each thread has its own CookieContainer, which lets it make several calls within the same IIS session.
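to make that setup concrete, here's a minimal sketch of the thread array and the per-thread CookieContainer, in .NET 1.1-style C#. the `StressService` proxy class and its `GetData` method are hypothetical stand-ins for the real generated web reference:

```csharp
using System;
using System.Net;
using System.Threading;

class ClientWorker
{
    private readonly int callsPerThread;

    public ClientWorker(int callsPerThread)
    {
        this.callsPerThread = callsPerThread;
    }

    // Each worker gets its own CookieContainer, so all of its
    // calls share one IIS session: the ASP.NET session cookie is
    // captured on the first call and replayed on the later ones.
    public void Run()
    {
        StressService proxy = new StressService();
        proxy.CookieContainer = new CookieContainer();

        for (int i = 0; i < callsPerThread; i++)
        {
            proxy.GetData();   // synchronous web method call
        }
    }
}

class Launcher
{
    static void Main()
    {
        int x = 500;   // number of simulated clients
        int y = 10;    // calls per simulated client

        Thread[] threads = new Thread[x];
        for (int i = 0; i < x; i++)
        {
            ClientWorker worker = new ClientWorker(y);
            threads[i] = new Thread(new ThreadStart(worker.Run));
            threads[i].Start();
        }

        // wait for all simulated clients to finish
        foreach (Thread t in threads)
            t.Join();
    }
}
```

this is just the skeleton of the approach described above, not the actual (confidential) code.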
the server fetches some data from a SQL Server database and returns a DataSet. nothing fancy (so far; it will evolve into a more complex architecture).
so far everything is working ok, but we're not really satisfied with the results: with 500 threads launched, the client CPU sits at 30%, while the server CPUs are also at about 30% ... (network traffic is around 2%, so it's not a LAN bottleneck)
so this puts everything back in perspective, from the threaded approach, to the cookie container, to the way the web services are called, to the choice of web services themselves. i need to push the server close to its limit (and it's a big machine, too ...)
a few details :
- VS 2003, .NET framework 1.1
- c# client application
- threads are declared in an array, and each thread is started with the Thread.Start() method
- web proxies are generated with "add web reference ..." from visual studio
- web methods are called synchronously
- logging is done using .NET tracing
- global variables are stored in a "shared memory" (singleton) class.
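for the "shared memory" singleton, here's one hedged sketch of what a thread-safe statistics holder could look like (the class and member names are made up; the point is that every thread takes a lock before touching the shared counters, so concurrent updates can't corrupt them):

```csharp
using System;

// Hypothetical singleton that collects execution times from
// all worker threads. All access to the shared state happens
// inside lock(...) blocks.
public sealed class Stats
{
    private static readonly Stats instance = new Stats();
    public static Stats Instance { get { return instance; } }

    private readonly object sync = new object();
    private long totalCalls = 0;
    private double totalMilliseconds = 0.0;

    private Stats() { }

    // called by each worker thread after a web method returns
    public void RecordCall(double milliseconds)
    {
        lock (sync)
        {
            totalCalls++;
            totalMilliseconds += milliseconds;
        }
    }

    // read from the UI / reporting side
    public double AverageMilliseconds
    {
        get
        {
            lock (sync)
            {
                if (totalCalls == 0)
                    return 0.0;
                return totalMilliseconds / totalCalls;
            }
        }
    }
}
```

a plain singleton without the locks would work most of the time and then produce wrong statistics under load, which is exactly the kind of test this client is running.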
the project is confidential, so i can't post any actual code (sorry), but it must be highly performant on both client and server, even if that means more code.
current ideas for changes :
- asynchronous calls to web services ?
- use protection and lock mechanisms for threads ? mutex ?
- remoting ?
- no threading ? maybe the CPU is wasting time on constant context switches ?
- better way to share globals between threads ? (i need them to display statistical data about execution times)
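on the first idea: the .NET 1.1 proxy generator already emits a Begin/End pair for each web method, so an asynchronous variant could look roughly like this (again, `StressService` / `GetData` are placeholders for the real generated proxy; the counter logic is just one crude way to wait for completion):

```csharp
using System;
using System.Net;
using System.Threading;

class AsyncCaller
{
    private static int pending = 0;

    static void Main()
    {
        StressService proxy = new StressService();
        proxy.CookieContainer = new CookieContainer();

        int y = 10;       // calls to fire
        pending = y;

        for (int i = 0; i < y; i++)
        {
            // Fire the call without blocking this thread; the
            // callback runs on a thread-pool thread when the
            // HTTP response arrives.
            proxy.BeginGetData(new AsyncCallback(OnDone), proxy);
        }

        // crude spin-wait until all callbacks have fired
        while (Interlocked.CompareExchange(ref pending, 0, 0) > 0)
            Thread.Sleep(50);
    }

    static void OnDone(IAsyncResult ar)
    {
        StressService proxy = (StressService)ar.AsyncState;
        proxy.EndGetData(ar);   // completes the call, rethrows any fault
        Interlocked.Decrement(ref pending);
    }
}
```

this trades 500 blocked threads for thread-pool callbacks, which may matter more for the client's ability to generate load than for the server's.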
thanks for any insight and suggestions. i've never had to do any performance-driven apps before so i'm learning a lot of stuff as i go, especially about threading, so there might be some important notions that i'm missing.
have a good day!
huby.