Nerseus - Experts - 2607 posts
Everything posted by Nerseus
-
You could probably use the DirectX library if you don't mind having that dependency for your application; i.e., a user has to have DirectX installed to use your application. I would bet a dollar/pound/etc. that someone has already created a free matrix math library in .NET, if you search Google. -ner
-
I like to think that I'm just slower...
-
New forum: Data Structures?
Nerseus replied to coldfusion244's topic in Suggestions, Bugs, and Comments
If you figure out the MP3 format, along with the C# code, I'd suggest posting a sample that could go into the Code Library forum for all to see. If you just have a question, it could go in the forum Derek mentioned. If you have the C++ structure and need help translating to C# or VB.NET, maybe in the language specific forum or even General. -ner -
If you can use an ArrayList:

    ArrayList al = ArrayList.Repeat(-1, 10);

If you're doing this a lot then you want a function:

    public sealed class ArrayUtil
    {
        private ArrayUtil() { } // private constructor - can't create

        public static int[] CreateArray(int size, int defaultValue)
        {
            // Make sure size is valid
            if (size <= 0) return null;

            int[] returnArray = new int[size];
            for (int i = 0; i < size; i++)
                returnArray[i] = defaultValue;

            return returnArray;
        }

        public static decimal[] CreateArray(int size, decimal defaultValue)
        {
            // Make sure size is valid
            if (size <= 0) return null;

            decimal[] returnArray = new decimal[size];
            for (int i = 0; i < size; i++)
                returnArray[i] = defaultValue;

            return returnArray;
        }
    }

    ...
    int[] i = ArrayUtil.CreateArray(10, -1);
    decimal[] d = ArrayUtil.CreateArray(10, -1M);

For now, I don't think there's any way to avoid a for loop. If you have a fixed array size, like 4, you could do the following, but I'm guessing you won't know the size:

    int[] i = new int[] { -1, -1, -1, -1 };

-ner
-
I see at least three things that can be made faster:

1. Use a For Each loop. A regular For loop incurs extra overhead vs. a For Each, as .NET will do an array bounds check on each iteration. This is small, but since a For Each is *easier*, might as well use it:

    Dim s As String
    For Each s In FileContentsArray
        Dim Regexp As Regex = New Regex(SearchExpression.Item(k))
        If Regexp.Match(s).Success Then
            'add the match to a dataset
        End If
    Next

2. Create the Regex objects once, outside of the loop. Not sure what k is - I'm guessing you loop through regular expressions? Below is a new version that just moves the Regex out. Otherwise a new Regex object is created on each iteration and then garbage collected. That's a tremendous amount of overhead that you can avoid.

    Dim s As String
    Dim Regexp As Regex = New Regex(SearchExpression.Item(k))
    For Each s In FileContentsArray
        If Regexp.Match(s).Success Then
            'add the match to a dataset
        End If
    Next

3. Put the match into a Match object so you can use it inside the If. This will help if you're testing for Success and then getting the Match object into a variable to do something with the matches. You didn't show what you did with the data.

    Dim s As String
    Dim Regexp As Regex = New Regex(SearchExpression.Item(k))
    Dim m As Match
    For Each s In FileContentsArray
        m = Regexp.Match(s)
        If m.Success Then
            'add the match to a dataset
            ds.Tables... = m.Groups("Col1").Value
        End If
    Next

There could be lots of other optimizations we'd recommend. The best thing to do is profile the code yourself and determine EXACTLY what the slowest part of the code is. If you don't have a tool, some scattered Debug.Write statements should help. It's always better to analyze first before diving into code changes where you *think* it's slow - it's just too easy to find out for sure. The first two above are pretty easy to do though, so you might try them first. -ner
-
Dern you, Rick! I had a typo, it's now fixed.
-
1. Not sure which part has the problem - your code, or your understanding of the code you're watching?

2. The <DesignerSerializationVisibility> is a "decoration" (an attribute) you can apply. This one goes on a property and helps an IDE like Visual Studio take some actions. For example, by saying Browsable(False), you won't see this property in the IntelliSense dropdown.

3. Setting the public modifier on the Get and Set separately will be supported in .NET 2.0 when it comes out - maybe you saw some early code?

-ner
-
Well, I don't expect many posts in this topic, but here we go. Post any computer related jokes here - I'll start! Why are computer programmers so bad at remembering holidays? Because: OCT 31 = DEC 25 If you're outside the US, Halloween is October 31 and Christmas is December 25. Hopefully the rest won't need explaining. -ner
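For anyone who wants to see the pun verified in code, here's a quick sketch (in Python only because octal literals are easy to write there - any language with octal literals works the same way):

```python
# The joke: "OCT 31" read as octal 31 equals "DEC 25" read as decimal 25.
oct_31 = 0o31   # octal literal: 3*8 + 1 = 25
dec_25 = 25     # plain decimal literal

print(oct_31 == dec_25)  # True - Halloween "equals" Christmas
```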
-
Ok, it's been a week and no one has mentioned the fact that someone is asking if they're a "looser" instead of a "loser"? :)
-
Problems Updating/Adding To A Dataset
Nerseus replied to blandie's topic in Database / XML / Reporting
You have to check for Nothing, as you expected. The syntax is: If Row Is Nothing Then The IsNull method checks if the column is null but it only works if you have a row to check. -ner -
That's odd, Denaes. I would think if you have all the credits required for two majors then you GET two majors: Bachelor of Science in Application Development and Bachelor of Science in Database Theory - IF those are two degrees. I would bet there's just one: Bachelor of Science in Computer Science. At my university we had "options", so you could get a BS in CompSci, Science Option or a BS in CompSci, Math Option. You can also get a major and a minor, in some areas. The credits I needed for a BS in CompSci got me a math minor, for example. -ner
-
@TheWizardofInt: That is absolutely right. SQL Server has a piece called the optimizer, which analyzes many things to determine which index(es) to use. I don't claim to be the most knowledgeable in that area, but I recently got a chance to work with one of the original programmers of the optimizer code. He apologized for earlier versions (before version 7), but had tremendous insight into how SQL Server will pick an index. In the simplest case, if you have an index on LastName and your query is something like:

    SELECT * FROM CustName WHERE LastName LIKE 'SMI%'

then the optimizer will likely pick it. There's a chance it won't, but I wouldn't worry about it. There are a couple of ways to see which indexes are being used, if any. There's a button to show the execution plan, but it's a bit hard to read if you ask me. There are also a couple of SET options that will turn on better plan analysis if you want. It will show what indexes are being used, in what order, how many records each index will filter to, and whether the index is SEEKing (good) or SCANning (bad). It will also show you if no indexes are being used (a table scan, usually bad). -ner
-
Maybe post the MSIL next time instead of the EXE? I worry that someone may post something malicious and say "Well you should have used the reflector to see that I was going to format your hard drive". -ner
-
stustarz is right, I would bet. If you want a true NULL passed to a stored proc, you use System.DBNull.Value. For example:

    cmd.Parameters("@suburbtown").Value = System.DBNull.Value

-ner
-
Last I heard/saw, FxCop has been included/integrated into the new Visual Studio. What's more, you can set options on the new SourceSafe to not allow check-ins if the code isn't "clean". I'm not 100% sure that it can see the "errors" that FxCop finds, but it will have options to disallow check-ins if there are syntax errors or other Build Errors (which might be Warnings treated as errors). And yes, FxCop is VERY picky, but very nice. -ner
-
Try looking at: http://www.support.microsoft.com/kb/105675

Or googling for: "first chance exception" site:microsoft.com

Sounds like there's some unmanaged code throwing an exception, likely because you've unplugged all your sensors? I've not had this message come up so I can't offer much else. -ner
-
Maybe it's just me, but I like to read the code like I think about it. I'd put z on the left of each comparison, like so: if (z > x && z < y). Meaning, I want to compare z to something, not compare x and y to something. In my head I say "if z is greater than x and less than y", so it's nicer to read the code that way. Or even better, refactor to a function:

    private bool Between(int compareNumber, int boundA, int boundB)
    {
        // Hide the bound ordering here so callers don't care which is larger
        int low = Math.Min(boundA, boundB);
        int high = Math.Max(boundA, boundB);
        return compareNumber > low && compareNumber < high;
    }

    if (Between(z, x, y))
    {
        // ...
    }

Maybe not the best names (Between, boundA, etc.), but still... make the code readable. Then the swapping of x and y can be hidden in the function to handle when boundA > boundB or when boundA < boundB. -ner
-
C# - Copying Rows between DataTables
Nerseus replied to EFileTahi-A's topic in Database / XML / Reporting
First, you don't need to Clear the rows in the code below - Clone only copies the schema, not the data. (The Copy method copies both the schema and the data.)

    dtDocLinTemp = dtTemp.Clone();
    // Don't need to do the following
    // dtDocLinTemp.Rows.Clear();

Second, I mentioned (twice) that you have to use ImportRow. Add only works for DataRows not associated with a DataTable, such as when you use NewRow to create a row. Here's probably what you want - note that ImportRow is a method on the DataTable itself, not on its Rows collection:

    DataTable DocLinTemp = DB_Engine.dtDocLinTemp.Clone();

    // So now I supposedly just need to put them back on dtInDocLin
    foreach (DataRow row in DB_Engine.dtDocLinTemp.Rows)
    {
        DocLinTemp.ImportRow(row);
    }

In the foreach above, I just coded it to loop through every row in the DataTable dtDocLinTemp. If you truly wanted the Select() method to do filtering or sorting, you could. Since you were just using it to get a DataRow array, I removed that extra step.

NOTE: In your last post, the second code snippet has THREE DataTables:

    DataTable DocLinTemp    // This is #1
    DB_Engine.dtDocLinTemp  // This is #2, used in the Select
    dtInDocLin.Rows.Add     // This is #3

I have no idea what dtInDocLin is, or if it has the same schema as DB_Engine.dtDocLinTemp. I'm guessing that your code was just a typo and that the last line should have been DocLinTemp. But you'd have to make sure the schema came from the table you're importing rows from. -ner -
@Roey: Generally, you'll find more, smaller classes will pop up. A first pass might yield one big class with 50 properties, but some good refactoring will help you find smaller classes that make the code more manageable. One technique I found to work well was to take the existing procedural code (even if it has a lot of events and other non-procedural code) and refactor everything into one or two classes (as a first pass). As you start refactoring out temp variables into functions/methods you'll probably start to recognize some "classes". As you use the functions and methods more, you may refactor more and more until the code starts pointing you to the objects. It's a bit backwards from the usual "identify the classes first" approach of OOP, but it works very well when you start with a lot of non-OO code. I guess the simple answer is: more, smaller classes. -ner
-
I think Lanc is just trying to create a simple game with some complex logic and needs some help. He doesn't have any requirements except what he's making up himself. He does seem to want/need to evaluate the expression, including operator precedence since his first sample was "100+25*2" which should be 150 (if you multiply then add), not 250 (if you add then multiply). @Lanc: Personally, I admire you for trying something rather difficult for a project. If you want some advice, I would create a few more small test apps to test out code you're writing, such as a function that takes a string and returns the decimal return value. Once you have that working in a test project, move that code to your real game code. As for how to solve this particular issue... it's probably time you picked up an algorithms book and maybe a good .NET book. -ner
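To show what a precedence-aware evaluator involves, here's a minimal sketch (in Python for brevity - this is an illustration, not Lanc's code; it assumes only +, -, *, / and no parentheses):

```python
import re

def evaluate(expression):
    """Evaluate an arithmetic string, honoring * and / before + and -.

    Sketch only: no parentheses, no unary minus, no error handling.
    """
    tokens = re.findall(r"\d+(?:\.\d+)?|[+\-*/]", expression)

    # First pass: collapse * and / immediately, so they bind tighter.
    first_pass = [float(tokens[0])]
    i = 1
    while i < len(tokens):
        op, value = tokens[i], float(tokens[i + 1])
        if op == "*":
            first_pass[-1] *= value
        elif op == "/":
            first_pass[-1] /= value
        else:
            first_pass.append(op)
            first_pass.append(value)
        i += 2

    # Second pass: fold the remaining + and - left to right.
    result = first_pass[0]
    j = 1
    while j < len(first_pass):
        if first_pass[j] == "+":
            result += first_pass[j + 1]
        else:
            result -= first_pass[j + 1]
        j += 2
    return result

print(evaluate("100+25*2"))  # 150.0 - multiply first, then add
```

The same two-pass idea (or a full shunting-yard/recursive-descent parser, once parentheses are needed) translates directly to VB.NET or C#.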
-
I was asking because the SQL didn't look like it was doing what you'd expect, if it were truly two different databases. Suppose the SQL built in step 8 looks like this (I made up the path):

    INSERT INTO InDocLin SELECT * FROM DocLinTemp IN 'c:\DataTemp.mdb'

The table DocLinTemp is coming from the database c:\DataTemp.mdb, not the other one. At least, that's what my tests show. I think the code is assuming that it's using the DocLinTemp table from Step 7 - the one getting updated. I can't explain why the MessageBox would change this though.

Here was my test: I created two databases, db1.mdb and db2.mdb.

    db1 has table DocLinTemp ONLY
    db2 has table InDocLin ONLY

Using similar code to yours I create a connection to db1 and perform the UPDATE. Then, still using that connection, I issue the INSERT INTO command that includes the "IN 'c:\db2.mdb'" clause. I get an error that table DocLinTemp doesn't exist. So even though I'm using a connection to db1 - which has the table - it seems like the query isn't really using that table. What do you make of that? -ner
-
Adding multiple rows to the server at a time.
Nerseus replied to JarHead's topic in Database / XML / Reporting
If you must use code - e.g., PostgreSQL doesn't support any kind of bulk copy - then I'd recommend dynamic SQL, batching up multiple inserts (if that's what you need) into one chunk of SQL. Finding a balance is up to you, but I generally start with 50 records or so. Something like:

    sql = "INSERT INTO Table1 (col1, col2) VALUES (123, 'Hello')" + Environment.NewLine
    sql = sql + "INSERT INTO Table1 (col1, col2) VALUES (456, 'World')"

For "batch jobs" where speed is a concern, you should generally try to avoid the "good programming" practices of using Command objects and Parameter objects, as they're a lot slower than you probably want. For 200,000 rows it's a toss-up - that's kind of an in-between size, so you may not need to sacrifice "good programming" for speed. Maintenance is also important. If you decide not to use dynamic SQL, maybe let us know some more details:

    * what kind of code you've got so far
    * how big each row is (how many columns and relative sizes)
    * what kind of machine you're running against (the server)
    * what kind of data source you have (where do you get your data)
    * what you expect in terms of speed (how long to run)

-ner -
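An aside on the batching advice in the reply above - the chunking idea can be sketched language-agnostically. Here it is in Python, with made-up table/column names; it assumes the row values are already formatted as SQL literals (real code must escape values, or use parameters when speed allows):

```python
def build_insert_batches(rows, table="Table1", columns=("col1", "col2"), batch_size=50):
    """Group rows into batches and build one multi-statement SQL string per batch.

    rows: list of tuples of already-formatted SQL literals.
    Sketch only - illustrates the batching, not safe value escaping.
    """
    batches = []
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        statements = [
            "INSERT INTO {0} ({1}) VALUES ({2})".format(
                table, ", ".join(columns), ", ".join(str(v) for v in row)
            )
            for row in chunk
        ]
        # One round-trip to the server per batch instead of per row.
        batches.append("\n".join(statements))
    return batches

batches = build_insert_batches([(123, "'Hello'"), (456, "'World'")])
print(batches[0])
```

Tuning batch_size trades memory and statement-size limits against round-trip overhead; 50 is just the starting point suggested above.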
I'd suggest the following book: Test-Driven Development in Microsoft .NET It goes into detail on how to test DB related code as well as webservices, another tricky one. The basic idea is to have a "fake" set of data using NUnit. For example, in order to test some insert code, you have to know what you're inserting and verify that it got inserted. It's tedious, but worth it the very first time someone reports a bug. If you have a medium to large sized project you may think that the test code is going to take way longer to write - and you'd be absolutely correct. But, when you get near the end of the project and people start reporting more bugs (even when they're really just Change Orders), the test code will start to show its value. -ner
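The insert-then-verify pattern described above can be sketched outside NUnit too - here with Python's built-in unittest and an in-memory SQLite database standing in for the "fake" data (the table, function, and names are all made up for illustration):

```python
import sqlite3
import unittest

def insert_customer(conn, name):
    """Hypothetical code under test: insert a row and return its new id."""
    cur = conn.execute("INSERT INTO Customer (Name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

class InsertCustomerTests(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database per test, so tests never touch real data.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE Customer (Id INTEGER PRIMARY KEY, Name TEXT)")

    def test_insert_then_verify(self):
        # Know what you're inserting, then verify it actually got inserted.
        new_id = insert_customer(self.conn, "Smith")
        row = self.conn.execute(
            "SELECT Name FROM Customer WHERE Id = ?", (new_id,)
        ).fetchone()
        self.assertEqual(row[0], "Smith")

# Run the test case programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(InsertCustomerTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The NUnit version is structurally identical: a setup method builds the known data, and each test asserts against it.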
-
Adding multiple rows to the server at a time.
Nerseus replied to JarHead's topic in Database / XML / Reporting
If this is SQL Server, you can't really beat bcp, though DTS is nearly as fast and is a lot more robust. For "batch jobs" that extract or import data from other sources (files, Excel, another database), DTS is the recommended approach and is worth investigating. -ner -
I have both VS6 and VS2003 and I have noticed no compatibility problems. I use VC++ 6 frequently and VC++ .NET a bit less. I have small projects in Managed C++ (.NET 2003) and a slightly larger project in standard C++ in .NET 2003 with no problems. For a while, probably close to a year ago, I had VS6 and VS2002 installed together with no issues, though I didn't use both actively at the time (I was using VS2002 for C# projects). I did have the older VS6 installed first, though I have no idea if that's a requirement, a suggestion, or anything else. -ner