| instruction (string, 21 to 27.8k chars) | chosen (string, 18 to 28.2k chars) | rejected (string, 18 to 33.6k chars) | __index_level_0__ (int64, 0 to 50k) |
---|---|---|---|
<p>I have a number of native C++ libraries (Win32, without MFC) compiling under Visual Studio 2005, and used in a number of solutions. </p>
<p>I'd like to be able to choose to compile and link them as either static libraries or DLLs, depending on the needs of the particular solution in which I'm using them.</p>
<p>What's the best way to do this? I've considered these approaches:</p>
<h2>1. Multiple project files</h2>
<ul>
<li>Example: "foo_static.vcproj" vs "foo_dll.vcproj"</li>
<li>Pro: easy to generate for new libraries, not too much manual vcproj munging.</li>
<li>Con: settings, file lists, etc. in two places get out of sync too easily.</li>
</ul>
<h2>2. Single project file, multiple configurations</h2>
<ul>
<li>Example: "Debug | Win32" vs "Debug DLL | Win32", etc.</li>
<li>Pro: file lists are easier to keep in sync; compilation options are somewhat easier to keep in sync</li>
<li>Con: I build for both Win32 and Smart Device targets, so I already have multiple configurations; I don't want to make my combinatorial explosion worse ("Static library for FooPhone | WinMobile 6", "Dynamic library for FooPhone | WinMobile 6", "Static library for BarPda | WinMobile 6", etc.).</li>
<li>Worse Con: VS 2005 has a bad habit of assuming that if you have a configuration defined for platform "Foo", then you really need it for all other platforms in your solution, and haphazardly inserts all permutations of configuration/platform configurations all over the affected vcproj files, whether valid or not. (Bug filed with MS; closed as WONTFIX.)</li>
</ul>
<h2>3. Single project file, selecting static or dynamic via vsprops files</h2>
<ul>
<li>Example: store the appropriate vcproj fragments in property sheet files, then apply the "FooApp Static Library" property sheet to config/platform combinations when you want static libs, and apply the "FooApp DLL" property sheet when you want DLLs.</li>
<li>Pros: <strong>This is what I really want to do!</strong></li>
<li>Cons: <strong>It doesn't seem possible.</strong> It seems that the .vcproj attribute that switches between static and dynamic libraries (the ConfigurationType attribute of the Configuration element) isn't overrideable by the .vsprops file. Microsoft's published schema for these files lists only <Tool> and <UserMacro> elements.</li>
</ul>
<p><strong>EDIT</strong>: In case someone suggests it, I've also tried a more "clever" version of #3, in which I define a .vsprops containing a UserMacro called "ModuleConfigurationType" with a value of either "2" (DLL) or "4" (static library), and changed the configuration in the .vcproj to have <code>ConfigurationType="$(ModuleConfigurationType)"</code>. Visual Studio silently and without warning removes the attribute and replaces it with <code>ConfigurationType="1"</code>. So helpful!</p>
<p>Am I missing a better solution?</p>
| <p>I may have missed something, but why can't you define the DLL project with no files, and just have it link the lib created by the other project?
And, with respect to settings, you can factor them out in vsprop files...</p>
| <p>Why not go for version 1 and generate the second set of project files from the first using a script or something. That way you know that the differences are JUST the pieces required to build a dll or static lib.</p>
| 49,256 |
<p>I know that python has a <code>len()</code> function that is used to determine the size of a string, but I was wondering why it's not a method of the string object?</p>
| <p>Strings do have a length method: <code>__len__()</code></p>
<p>The protocol in Python is to implement this method on objects which have a length and use the built-in <a href="http://www.python.org/doc/2.5.2/lib/built-in-funcs.html#l2h-45" rel="noreferrer"><code>len()</code></a> function, which calls it for you, similar to the way you would implement <code>__iter__()</code> and use the built-in <code>iter()</code> function (or have the method called behind the scenes for you) on objects which are iterable.</p>
<p>See <a href="http://www.python.org/doc/2.5.2/ref/sequence-types.html" rel="noreferrer">Emulating container types</a> for more information.</p>
<p>Here's a good read on the subject of protocols in Python: <a href="http://lucumr.pocoo.org/2011/7/9/python-and-pola/" rel="noreferrer">Python and the Principle of Least Astonishment</a></p>
| <p>It doesn't?</p>
<pre><code>>>> "abc".__len__()
3
</code></pre>
| 29,344 |
<p>I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations: </p>
<ul>
<li>Windows XP SP2+</li>
<li><strike>Windows Server 2003</strike> <strong>Edit:</strong> <a href="http://blogs.windowsclient.net/trickster92/archive/2008/05/21/introducing-the-net-framework-client-profile.aspx" rel="nofollow noreferrer">Appears</a> the Client Profile will not install on Windows Server 2003.</li>
</ul>
<p>In addition, the client profile is <strong>not</strong> valid for x64 or ia64 editions; and will also not install if <em>any previous version of the .NET Framework has been installed</em>.</p>
<p>I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth it. Are there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large number of people. It would then be a question of whether my application targeted those individuals specifically.</p>
<p>Has anyone else determined if it is worth the extra effort to target these specific users?</p>
<p><strong>Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration.</strong> Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+.</p>
| <p>Ultimately, it will not hurt any users if you target the Client Profile. This is because the client profile is a subset of the .net framework v3.5 sp1, and if v3.5 sp1 is already installed you don't need to install anything. </p>
<p>The assemblies in the client profile are the same binaries as the full framework, so unless you're loading assemblies dynamically, then you shouldn't need to do any additional testing. </p>
<p>My thinking is that unless you must use assemblies which are NOT in the client profile, then you should target it. </p>
<p>As for the OS requirements, WPF won't run on pre-XP sp2, so if you need to run on other OSes, then you'll have to use WinForms anyways.</p>
<p>EDIT:</p>
<blockquote>
<p>On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:</p>
</blockquote>
<p>Actually, so does Firefox 3 with .NET 3.5 SP1 installed:</p>
<blockquote>
<p>Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1 (.NET CLR 3.5.30729)</p>
</blockquote>
| <blockquote>
<p>I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available.</p>
</blockquote>
<p>On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:</p>
<pre><code>Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; .NET CLR 2.0.50727).
</code></pre>
| 3,648 |
<p>I was considering the <code>System.Collections.ObjectModel.ObservableCollection<T></code> class. This one is strange because </p>
<ul>
<li>it has an Add Method which takes <strong>one</strong> item only. No AddRange or equivalent. </li>
<li>the notification event arguments have a NewItems property, which is an <strong>IList</strong> (of objects, not T)</li>
</ul>
<p>My need here is to add a batch of objects to a collection and the listener also gets the batch as part of the notification. Am I missing something with ObservableCollection ? Is there another class that meets my spec?</p>
<p><em>Update: Don't want to roll my own as far as feasible. I'd have to build in add/remove/change etc.. a whole lot of stuff.</em></p>
<hr>
<p>Related Q:<br>
<a href="https://stackoverflow.com/questions/670577/observablecollection-doesnt-support-addrange-method-so-i-get-notified-for-each/670579#670579">https://stackoverflow.com/questions/670577/observablecollection-doesnt-support-addrange-method-so-i-get-notified-for-each</a></p>
| <p>It seems that the <code>INotifyCollectionChanged</code> interface allows for updating when multiple items were added, so I'm not sure why <code>ObservableCollection<T></code> doesn't have an <code>AddRange</code>. You could make an extension method for <code>AddRange</code>, but that would cause an event for every item that is added. If that isn't acceptable you should be able to inherit from <code>ObservableCollection<T></code> as follows:</p>
<pre><code>using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.Linq;

public class MyObservableCollection<T> : ObservableCollection<T>
{
    // matching constructors ...

    bool isInAddRange = false;

    protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
    {
        // Suppress the per-item notifications raised while AddRange is running.
        if (!isInAddRange)
            base.OnCollectionChanged(e);
    }

    public void AddRange(IEnumerable<T> items)
    {
        // Materialize once so the same items are both added and reported.
        List<T> itemList = items.ToList();

        isInAddRange = true;
        foreach (T item in itemList)
            Add(item);
        isInAddRange = false;

        var e = new NotifyCollectionChangedEventArgs(
            NotifyCollectionChangedAction.Add,
            itemList);
        base.OnCollectionChanged(e);
    }
}
</code></pre>
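<p>A brief usage sketch (hypothetical item values, for illustration only): because the per-item events are suppressed, a listener sees the whole batch once through <code>e.NewItems</code>.</p>
<pre><code>var collection = new MyObservableCollection<string>();
collection.CollectionChanged += (sender, e) =>
{
    // Fired once for the whole batch; e.NewItems holds every added item.
    Console.WriteLine("Added {0} item(s)", e.NewItems.Count);
};

collection.AddRange(new[] { "alpha", "beta", "gamma" }); // prints "Added 3 item(s)"
</code></pre>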
| <p>Take a look at <a href="https://stackoverflow.com/questions/670577/observablecollection-doesnt-support-addrange-method-so-i-get-notified-for-each/670579#670579">Observable collection with AddRange, RemoveRange and Replace range methods</a> in both C# and VB.</p>
<p>In VB: INotifyCollectionChanging implementation.</p>
| 8,123 |
<p>If you think it shouldn't, explain why.</p>
<p>If yes, how deep should the guidelines be in your opinion? For example, indentation of code should be included?</p>
| <p>I think a <em>team</em> (rather than a <em>company</em>) need to agree on a set of guidelines for reasonably consistent style. It makes it more straightforward for maintenance. </p>
<p>How deep? As shallow as you can agree on. The shorter and clearer it is the more likely it is that all the team members can agree to it and will abide by it.</p>
<p>Yes, I think companies should. Developers may need to get used to the coding style, but in my opinion a good programmer should be able to work with any coding style. As Midhat said: it is important to have a consistent codebase.</p>
<p>I think this is also important for opensource projects, there is no supervisor to tell you how to write your code but many languages have specifications on how naming and organisation of your code should be. This helps a lot when integrating opensource components into your project. </p>
| 17,543 |
<p>The <code>Open</code> button on the open file dialog used in certain windows applications includes a dropdown arrow with a list of additional options — namely <code>Open With..</code>. </p>
<p><a href="https://i.stack.imgur.com/GLM3T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GLM3T.png" alt="Open File Dialog"></a></p>
<p>I haven't seen this in every Windows application, so you may have to try a few to get it, but SQL Server Management Studio and Visual Studio 2017 will both show the button that way if you go to the menu and choose <em><code>File</code>-><code>Open</code>-><code>File...</code></em></p>
<p>I want to use a button like this with a built-in list in one of my applications, but I can't find the control they're using anywhere in Visual Studio. I should clarify that I'm looking for that specific button, not the entire dialog. Any thoughts?</p>
| <p>I used the draggable search in Spy++ (installed with VS) to look at the split open button on the file-open dialog of VS.</p>
<p>This revealed that it's an ordinary windows button with a style which includes BS_DEFSPLITBUTTON. That's a magic keyword which gets you to some interesting places, including</p>
<p><a href="http://www.codeplex.com/windowsformsaero/SourceControl/FileView.aspx?itemId=212902&changeSetId=9930" rel="noreferrer">http://www.codeplex.com/windowsformsaero/SourceControl/FileView.aspx?itemId=212902&changeSetId=9930</a></p>
<p>and here</p>
<p><a href="http://msdn.microsoft.com/en-us/library/bb775949.aspx#using_splits" rel="noreferrer">http://msdn.microsoft.com/en-us/library/bb775949.aspx#using_splits</a></p>
<p>Hope this helps you. </p>
<p>EDIT:</p>
<p>I've actually just tried that code from CodePlex and it does create a split button - but you do have to make sure you've set the button's FlatStyle to 'System' rather than 'Standard' which is the default. I've not bothered to hook-up the event handling stuff for the drop-down, but that's covered in the MSDN link, I think.</p>
<p>Of course, this is Vista-only (but doesn't need Aero enabled, despite the name on codeplex) - if you need earlier OS support, you'll be back to drawing it yourself.</p>
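<p>For reference, a minimal sketch of that approach (my own illustration, not code from either link; the <code>BS_SPLITBUTTON</code> value of 0x0000000C is assumed from CommCtrl.h):</p>
<pre><code>using System.Windows.Forms;

// Hypothetical sketch: add the native split-button style to a standard Button.
// Only works when FlatStyle is System and the OS supports the style (Vista+).
// The drop-down itself is signalled via the BCN_DROPDOWN notification, which
// you would handle by overriding WndProc (see the MSDN link above).
public class SplitButton : Button
{
    private const int BS_SPLITBUTTON = 0x0000000C; // assumed value from CommCtrl.h

    public SplitButton()
    {
        FlatStyle = FlatStyle.System; // the native style is ignored otherwise
    }

    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.Style |= BS_SPLITBUTTON;
            return cp;
        }
    }
}
</code></pre>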
| <p>Since I found the control in Windows itself, I was hoping to find it built-in somewhere already so I didn't have to add anything to my code-base to use it. But the split button at <a href="http://blogs.msdn.com/jfoscoding/articles/491523.aspx" rel="nofollow noreferrer">this link</a> (found via the msdn suggestion) looks pretty promising.</p>
<p>I'll try it later myself, but I don't know how well it will handle visual styles.</p>
| 5,098 |
<p>I'm trying to write a (sh -bourne shell) script that processes lines as they are written to a file. I'm attempting to do this by feeding the output of <code>tail -f</code> into a <code>while read</code> loop. This tactic seems to be proper based on my research in Google as well as <a href="https://stackoverflow.com/questions/157163/how-to-do-something-with-bash-when-a-text-line-appear-to-a-file">this question</a> dealing with a similar issue, but using bash.</p>
<p>From what I've read, it seems that I should be able to break out of the loop when the file being followed ceases to exist. It doesn't. In fact, it seems the only way I can break out of this is to kill the process in another session. <code>tail</code> does seem to be working fine otherwise as testing with this:</p>
<pre>
touch file
tail -f file | while read line
do
echo $line
done
</pre>
<p>Data I append to <code>file</code> in another session appears just fine from the loop processing written above.</p>
<p>This is on HP-UX version B.11.23.</p>
<p>Thanks for any help/insight you can provide!</p>
| <p>If you want to break out, when your file does not exist any more, just do it:</p>
<pre><code> test -f file || break
</code></pre>
<p>Placing this in your loop should break out of it. </p>
<p>The remaining problem is how to break out of the blocking read.</p>
<p>You can do this by applying a timeout, like read -t 5 line. Then every 5 seconds the read returns, and if the file no longer exists, the loop will break. Attention: write your loop so that it can handle the case where the read times out but the file is still present.</p>
<p>EDIT: Seems that with timeout read returns false, so you could combine the test with the timeout, the result would be:</p>
<pre><code> tail -f test.file | while read -t 3 line || test -f test.file; do
some stuff with $line
done
</code></pre>
| <p>I don't know about HP-UX <code>tail</code> but GNU <code>tail</code> has the <code>--follow=name</code> option which will follow the file by name (by re-opening the file every few seconds instead of reading from the same file descriptor which will not detect if the file is unlinked) and will exit when the filename used to open the file is unlinked:</p>
<pre><code>tail --follow=name test.txt
</code></pre>
| 44,098 |
<p>My team is responsible for the development of an API for a large system that we also write. We need to provide example code so that other developers using our API can learn how to use it. We have been documenting the code using XML documentation comments,
e.g.:</p>
<pre><code>/// <summary>Summary here</summary>
/// <example>Here is an example <code>example code here</code> </example>
public void SomeFunction()
</code></pre>
<p>We then use Sandcastle and build the help files we need (chm and an online website).</p>
<p>It is quite embarrassing when the example code doesn't work, and this is usually because some functionality has changed or there is a simple error.</p>
<p>Has anyone ever done something like this, but also configured unit tests to run on the example code so that they are known to work during the build?</p>
| <p>Yes, sandcastle supports this and it's great to maintain the correctness of examples. You can point to a code region like this:</p>
<pre><code> /// <summary>
/// Gizmo which can act as client or server.
/// </summary>
/// <example>
/// The following example shows how to use the gizmo as a client:
/// <code lang="cs"
/// source="..\gizmo.unittests\TestGizmo.cs"
/// region="GizmoClientSample"/>
/// </example>
public class Gizmo
</code></pre>
<p>You can then use some test code in TestGizmo.cs as an example by enclosing it in a region:</p>
<pre><code>[Test]
public void GizmoCanActAsClient()
{
    #region GizmoClientSample
    Gizmo gizmo = new Gizmo();
    gizmo.ActAsClient();
    #endregion
}
</code></pre>
<p>Caveat: If you move or rename the test file, you will only get an error about this when you try to regenerate the documentation with sandcastle.</p>
| <p><strong>Simple solution:</strong>
Make a small application in which you include all the sample code headers and then call their respective entry points</p>
<pre><code>#include "samples/sampleA.h"

void main()
{
    SomeFunction();
}
</code></pre>
<p>Then, after you make a build, run these little apps; you need to be sure they ran OK.
But can you verify that the code ran OK without having someone have a slumber party with the NightlyBuild server?</p>
<p><strong>Better Solution:</strong> Log the output and have someone look at it in the morning.</p>
<p><strong>Even Better Solution:</strong> Log the output and grep it or something so no one has to look at it unless its broken.</p>
<p><strong>Best Solution:</strong> Find a suitable test framework, hopefully something with all the bells and whistles you can get, so it can email people if it's broken or something like that. In our case we avoided the bells and whistles; instead we connected a <strong>USB Police Siren</strong> that goes off when something breaks. It's quite exciting!</p>
| 38,655 |
<p>Suppose I have a collection (be it an array, generic List, or whatever is the <strong>fastest</strong> solution to this problem) of a certain class, let's call it <code>ClassFoo</code>:</p>
<pre><code>class ClassFoo
{
    public string word;
    public float score;
    //... etc ...
}
</code></pre>
<p>Assume there are going to be around 50,000 items in the collection, all in memory.
Now I want to obtain as fast as possible all the instances in the collection that obey a condition on its <code>word</code> member, for example like this:</p>
<pre><code>List<ClassFoo> result = new List<ClassFoo>();
foreach (ClassFoo cf in collection)
{
    if (cf.word.StartsWith(query) || cf.word.EndsWith(query))
        result.Add(cf);
}
</code></pre>
<p>How do I get the results as fast as possible? Should I consider some advanced indexing techniques and datastructures?</p>
<p>The application domain for this problem is an autocompleter, that gets a query and gives a collection of suggestions as a result. Assume that the condition doesn't get any more complex than this. Assume also that there's going to be a lot of searches.</p>
| <p>With the constraint that the condition clause can be "anything", then you're limited to scanning the entire list and applying the condition.</p>
<p>If there are limitations on the condition clause, then you can look at organizing the data to more efficiently handle the queries.</p>
<p>For example, the code sample with the "byFirstLetter" dictionary doesn't help at all with an "endsWith" query.</p>
<p>So, it really comes down to what queries you want to do against that data.</p>
<p>In Databases, this problem is the burden of the "query optimizer". In a typical database, if you have a database with no indexes, obviously every query is going to be a table scan. As you add indexes to the table, the optimizer can use that data to make more sophisticated query plans to better get to the data. That's essentially the problem you're describing.</p>
<p>Once you have a more concrete subset of the types of queries then you can make a better decision as to what structure is best. Also, you need to consider the amount of data. If you have a list of 10 elements each less than 100 bytes, a scan of everything may well be the fastest thing you can do since you have such a small amount of data. Obviously that doesn't scale to 1M elements, but even clever access techniques carry a cost in setup, maintenance (like index maintenance), and memory.</p>
<p><strong>EDIT</strong>, based on the comment</p>
<p>If it's an auto completer, if the data is static, then sort it and use a binary search. You're really not going to get faster than that.</p>
<p>If the data is dynamic, then store it in a balanced tree, and search that. That's effectively a binary search, and it lets you keep add the data randomly.</p>
<p>Anything else is some specialization on these concepts.</p>
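<p>A minimal sketch of the "sort it and use a binary search" idea (my own illustration, covering only the prefix half; the EndsWith half can be served the same way from a second copy of the list with every word reversed):</p>
<pre><code>using System;
using System.Collections.Generic;

static class PrefixIndex
{
    // words must be sorted with StringComparer.Ordinal before calling this.
    public static IEnumerable<string> StartingWith(List<string> words, string prefix)
    {
        int index = words.BinarySearch(prefix, StringComparer.Ordinal);
        if (index < 0)
            index = ~index; // first element >= prefix

        while (index < words.Count &&
               words[index].StartsWith(prefix, StringComparison.Ordinal))
        {
            yield return words[index];
            index++;
        }
    }
}
</code></pre>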
| <p>Not sure I understand... All you can really do is optimize the rule, that's the part that needs to be fastest. You can't speed up the loop without just throwing more hardware at it. </p>
<p>You could parallelize if you have multiple cores or machines.</p>
| 12,280 |
<p>I am a totally blind programmer who would like to learn Python. Unfortunately the fact that code blocks are represented with different levels of indentation is a major stumbling block. I was wondering if there were any tools available that would allow me to write code using braces or some other code block delimiter and then convert that format into a properly indented representation that the Python interpreter could use?</p>
| <p>There's a solution to your problem that is distributed with python itself. <code>pindent.py</code>, it's located in the Tools\Scripts directory in a windows install (my path to it is C:\Python25\Tools\Scripts), it looks like you'd have to <a href="https://svn.python.org/projects/python/trunk/Tools/scripts/pindent.py" rel="nofollow noreferrer">grab it from svn.python.org</a> if you are running on Linux or OSX. </p>
<p>It adds comments when blocks are closed, or can properly indent code if comments are put in. Here's an example of the code outputted by pindent with the command:</p>
<p><code>pindent.py -c myfile.py</code></p>
<pre><code>def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a: a = a-1
        # end if
    else:
        print 'oops!'
    # end if
# end def foobar
</code></pre>
<p>Where the original <code>myfile.py</code> was: </p>
<pre><code>def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a: a = a-1
    else:
        print 'oops!'
</code></pre>
<p>You can also use <code>pindent.py -r</code> to insert the correct indentation based on comments (read the header of pindent.py for details), this should allow you to code in python without worrying about indentation.</p>
<p>For example, running <code>pindent.py -r myfile.py</code> will convert the following code in <code>myfile.py</code> into the same properly indented (and also commented) code as produced by the <code>pindent.py -c</code> example above:</p>
<pre><code>def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a: a = a-1
        # end if
    else:
        print 'oops!'
    # end if
# end def foobar
</code></pre>
<p>I'd be interested to learn what solution you end up using, if you require any further assistance, please comment on this post and I'll try to help.</p>
| <p>There are various answers explaining how to do this. But I would recommend not taking this route. While you could use a script to do the conversion, it would make it hard to work on a team project.</p>
<p>My recommendation would be to configure your screen reader to announce the tabs. This isn't as annoying as it sounds, since it would only say "indent 5" rather than "tab tab tab tab tab". Furthermore, the indentation would only be read whenever it changed, so you could go through an entire block of code without hearing the indentation level. In this way hearing the indentation is no more verbose than hearing the braces.</p>
<p>As I don't know which operating system or screen reader you use I unfortunately can't give the exact steps for achieving this.</p>
| 14,445 |
<p>What files do I need to put the header comment in for adding GPL to a C# project? </p>
<p>Does form generated code require it?</p>
<p>Does it just need to be in every *.cs file?</p>
<p>Is there a resource or in-depth list of language-specific steps required to add GPL to any kind of project?</p>
| <p>The canonical answer is in the <a href="http://www.gnu.org/licenses/gpl-howto.html" rel="noreferrer">GPL Howto</a>:</p>
<blockquote>
<p>Whichever license you plan to use, the
process involves adding two elements
to each source file of your program: a
copyright notice (such as “Copyright
1999 Terry Jones”), and a statement of
copying permission, saying that the
program is distributed under the terms
of the GNU General Public License (or
the Lesser GPL).</p>
</blockquote>
<p>The recommended header for applying the GPL is:</p>
<blockquote>
<p>Copyright 200X My Name</p>
<p>This file is part of Foobar.</p>
<p>Foobar is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.</p>
<p>Foobar is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.</p>
<p>You should have received a copy of the GNU General Public License
along with Foobar. If not, see <a href="http://www.gnu.org/licenses/" rel="noreferrer">http://www.gnu.org/licenses/</a>.</p>
</blockquote>
<p>Yes, it SHOULD be added to <strong>every file</strong>, since you cannot legally depend upon the assumption that every recipient receives your work as a whole. And, no, it doesn't have to be the complete license text.</p>
| <p>Please notice that the FSF postal address is not 59 Temple Place, but the one below.</p>
<blockquote>
<p>Free Software Foundation, Inc.<br>
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA</p>
</blockquote>
<p>The only way not to screw up the license text is to take it from GNU web site. Notice that the site has licenses in plain text format, which is usually preferred format in comparison to html.</p>
<p><a href="http://www.gnu.org/licenses/" rel="nofollow">http://www.gnu.org/licenses/</a></p>
| 18,481 |
<p>First off, how do I know if my html file is running on localhost in Xampp?
Is there a tutorial on how to manage files/directories and get that all working under htdocs?
Is there a good tutorial on how to setup includes?</p>
<p>I want to use "includes" in Xampp with my html.
Can I use both html includes AND php includes?
Do I have to use the .shtml extension?
Can I use shtml, html, htm, and php includes?
Do they have to be in an includes directory that is a subdirectory right under htdocs?
Can I reference includes in some other subdirectory?
My site will have over 100 pages, and I am trying to do "experiments" with different versions until I am happy. So, I have subdirectories for the various drop down menus. Unfortunately, I don't seem to be able to get this working in xampp.
I'm having trouble getting my JavaScript menus from Vista Buttons to show up, now that I moved my main directory for my site to the htdocs directory.</p>
| <p>Since <strong>XAMPP</strong> uses <strong>Apache</strong> you need to configure it to permit <strong>SSI</strong>.</p>
<blockquote>
<p>To permit SSI on your server, you must have the following directive either in your httpd.conf file, or in a .htaccess file:</p>
<pre><code>Options +Includes
</code></pre>
<p>This tells Apache that you want to permit files to be parsed for SSI directives. Note that most configurations contain multiple Options directives that can override each other. You will probably need to apply the Options to the specific directory where you want SSI enabled in order to assure that it gets evaluated last.</p>
<p>Not just any file is parsed for SSI directives. You have to tell Apache which files should be parsed. There are two ways to do this. You can tell Apache to parse any file with a particular file extension, such as .shtml, with the following directives:</p>
<pre><code>AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
</code></pre>
<p>One disadvantage to this approach is that if you wanted to add SSI directives to an existing page, you would have to change the name of that page, and all links to that page, in order to give it a .shtml extension, so that those directives would be executed.</p>
<p>The other method is to use the XBitHack directive:</p>
<pre><code>XBitHack on
</code></pre>
<p>XBitHack tells Apache to parse files for SSI directives if they have the execute bit set. So, to add SSI directives to an existing page, rather than having to change the file name, you would just need to make the file executable using chmod.</p>
<pre><code>chmod +x pagename.html
</code></pre>
</blockquote>
<p>According to <a href="http://httpd.apache.org/docs/2.2/howto/ssi.html#configuring" rel="nofollow noreferrer">Apache Tutorial: Introduction to Server Side Includes</a></p>
| <p>You might want to look at AMPstart instead of xampp-control. It has some nice ability to allow you to place site-folders outside of htdocs w/o messing around with apache conf stuff</p>
| 40,300 |
<p>I am using iText to generate PDF invoices for a J2EE web application and included on the page is an image read from a URL constructed from the request URL. In the development and test environments this works fine, but in production I get a java.io.IOException: is not a recognized imageformat.</p>
<p>If I paste the url into my browser then the correct image is returned, however the request is redirected from http to https. In my code if I hard code the redirect URL then the image is displayed correctly. </p>
<p>So it seems that when retrieving the image using com.lowagie.text.Image.getInstance(URL), the redirects on the URL are not being followed. How can I output an image from a redirected URL using iText?</p>
| <p>Well,</p>
<p>If you ask for an image from a URL, it must actually point to the image. If the URL points to a web page that then redirects to another URL (or the return code from the URL is a redirection), then it is going to fail.</p>
<p>This is essentially because the getInstance() method understands how to fetch a file from an HTTP URL, but does not implement enough of the HTTP protocol to act as a full HTTP client (for example, following redirects).</p>
<p>You could just use the 'https' address, or you could store the image with your program and load it locally, as CFreiner suggests. If neither of these options is feasible, then your only real solution is to implement code to query the URL, check if the response is a redirection and, if it is, follow the redirect. </p>
| <p>Is there a reason you have to get this using the URL?? Do you have to match the image that the url is pointing to? What if it changes or gets removed?</p>
<p>I am not sure of your requirement, but it may be easier to save the image from the url and place it somewhere within your project. Then you can add it to your pdf with:</p>
<pre><code>Image.getInstance("yourimage.gif");
</code></pre>
| 39,266 |
<p>This is for a small scheduling app. I need an algorithm to efficiently compare two "schedules", find differences, and update only the data rows which have been changed, as well as entries in another table having this table as a foreign key. This is a big question, so I'll say right away I'm looking for either <strong>general advice</strong> or <strong>specific solutions</strong>.</p>
<p><strong>EDIT:</strong> As suggested, I have significantly shortened the question.</p>
<p>In one table, I associate resources with a span of time when they are used. </p>
<p>I also have a second table (Table B) which uses the ID from Table A as a foreign key.</p>
<p>The entry from Table A corresponding to Table B will have a span of time which <strong>subsumes</strong> the span of time from Table B. Not all entries in Table A will have an entry in Table B.</p>
<p>I'm providing an interface for users to edit the resource schedule in Table A. They basically provide a new set of data for Table A that I need to treat as a <em>diff</em> from the version in the DB.</p>
<p>If they completely remove an object from Table A that is pointed to by Table B, I want to remove the entry from Table B as well.</p>
<p>So, given the following 3 sets:</p>
<ul>
<li>The original objects from Table A (from the DB)</li>
<li>The original objects from Table B (from the DB)</li>
<li>The edited set of objects from Table A (from the user, so no unique IDs)</li>
</ul>
<p>I need an algorithm that will:</p>
<ul>
<li>Leave rows in Table A and Table B untouched if no changes are needed for those objects.</li>
<li>Add rows to Table A as needed.</li>
<li>Remove rows from Table A and Table B as needed.</li>
<li>Modify rows in Table A and Table B as needed.</li>
</ul>
<p>Just sorting the objects into an arrangement where I can apply the appropriate database operations is more than adequate for a solution.</p>
<p>Again, please answer as <strong>specifically</strong> or <strong>generally</strong> as you like, I'm looking for advice but if someone has a complete algorithm that would just make my day. :)</p>
<p><strong>EDIT:</strong> In response to lassvek, I am providing some additional detail:</p>
<p>Table B's items are always contained entirely within Table A items, not merely overlapping.</p>
<p><em>Importantly,</em> Table B's items are quantized so they should fall either entirely within or entirely outside. If this doesn't happen, then I have a data integrity error that I'll have to handle separately.</p>
<p>For example (to use a shorthand):</p>
<pre>
Table A
ID Resource Start End
01 Resource A 10/6 7:00AM 10/6 11:00AM
02 Resource A 10/6 1:00PM 10/6 3:00PM
Table B
ID Table_A_ID Start End
01 02 10/6 1:00PM 10/6 2:00PM
</pre>
<p>So I want the following behaviours:</p>
<ul>
<li>If I remove ID 02 from table A, or shorten it to 2:00PM - 3:00PM, I should remove ID 01 from Table B.</li>
<li>If I extend Table A ID 01 to where it ends at 1:00PM, <strong>these two entries should be merged together into one row</strong>, and Table B ID 01 should now point to table A ID 01.</li>
<li>If I remove 8:00AM-10:00AM from Table A ID 01, that entry should be split into two entries: One for 7:00AM-8:00AM, and a new entry (ID 03) for 10:00AM-11:00AM.</li>
</ul>
| <p>I have worked extensively with periods, but I'm afraid I don't understand entirely how table A and B work together, perhaps it's the word <em>subsume</em> that I don't understand.</p>
<p>Can you give some concrete examples of what you want done?</p>
<p>Do you mean that timespans recorded in table A contains entirely timespans in table B, like this?</p>
<pre><code>|---------------- A -------------------|
|--- B ----| |--- B ---|
</code></pre>
<p>or overlaps with?</p>
<pre><code> |---------------- A -------------------|
|--- B ----| |--- B ---|
</code></pre>
<p>or the opposite way, timespans in B contains/overlaps with A?</p>
<p>Let's say it's the first one, where timespans in B are inside/the same as the linked timespan in table A.</p>
<p>Does this mean that:</p>
<pre><code>* A removed A-timespan removes all the linked timespans from B
* An added A-timespan, what about this?
* A shortened A-timespan removes all the linked timespans from B that now falls outside A
* A lenghtened A-timespan, will this include all matching B-timespans now inside?
</code></pre>
<p>Here's an example:</p>
<pre><code>|-------------- A1 --------------| |-------- A2 --------------|
|---- B1 ----| |----- B2 ---| |---- B3 ----| |-- B4 --|
</code></pre>
<p>and then you lengthen A1 and shorten and move A2, so that:</p>
<pre><code>|-------------- A1 ---------------------------------| |--- A2 --|
|---- B1 ----| |----- B2 ---| |---- B3 ----| |-- B4 --|
</code></pre>
<p>this means that you want to modify the data like this:</p>
<pre><code>1. Lengthen (update) A1
2. Shorten and move (update) A2
3. Re-link (update) B3 from A2 to A1 instead
</code></pre>
<p>how about this modification, A1 is lengthened, but not enough to contain B3 entirely, and A2 is moved/shortened the same way:</p>
<pre><code>|-------------- A1 -----------------------------| |--- A2 --|
|---- B1 ----| |----- B2 ---| |---- B3 ----| |-- B4 --|
</code></pre>
<p>Since B3 is now not entirely within either A1 or A2, remove it?</p>
<p>I need some concrete examples of what you want done.</p>
<hr>
<p><strong>Edit</strong> More questions</p>
<p>Ok, what about:</p>
<pre><code>|------------------ A -----------------------|
|------- B1 -------| |------- B2 ------|
|---| <-- I want to remove this from A
</code></pre>
<p>What about this?</p>
<p>Either:</p>
<pre><code>|------------------ A1 ----| |---- A2 -----|
|------- B1 -------| |B3| |--- B2 ---|
</code></pre>
<p>or:</p>
<pre><code>|------------------ A1 ----| |---- A2 -----|
|------- B1 -------|
</code></pre>
<p>To summarize how I see it, with questions, so far:</p>
<ul>
<li>You want to be able to do the following operations on A's
<ul>
<li>Shorten</li>
<li>Lengthen</li>
<li>Combine when they are adjacent, combining two or more into one</li>
<li>Punch holes in them by removing a period, and thus splitting it</li>
</ul></li>
<li>B's that are still contained within an A after the above update, relink if necessary</li>
<li>B's that were contained, but are now entirely outside, delete them</li>
<li>B's that were contained, but are now partially outside, <strong>Edit: Delete these, ref data integrity</strong></li>
<li>For all the above operations, do the least minimum work necessary to bring the data in line with the operations (instead of just removing everything and inserting anew)</li>
</ul>
<p>I'll work on an implementation in C# that might work when I get home from work, I'll come back with more later tonight.</p>
<hr>
<p><strong>Edit</strong> Here's a stab at an algorithm.</p>
<ol>
<li>Optimize the new list first (ie. combine adjacent periods, etc.)</li>
<li>"merge" this list with the master periods in the database in the following way:
<ol>
<li>keep track of where in both lists (ie. new and existing) you are</li>
<li>if the current new period is entirely before the current existing period, add it, then move to the next new period</li>
<li>if the current new period is entirely after the current existing period, remove the existing period and all its child periods, then move to the next existing period</li>
<li>if the two overlap, adjust the current existing period to be equal to the new period, in the following way, then move on to the next new and existing period
<ol>
<li>if new period starts before existing period, simply move the start</li>
<li>if new period starts after existing period, check if any child periods are in the difference-period, and remember them, then move the start</li>
<li>do the same with the other end</li>
</ol></li>
</ol></li>
<li>with any periods you "remembered", see if they needs to be relinked or deleted</li>
</ol>
<p>You should create a massive set of unit tests and make sure you cover all combinations of modifications.</p>
<p>Your post is almost in the "too long; didn't read" category - shortening it will probably give you more feedback.</p>
<p>Anyway, on topic: you can try looking into a thing called <a href="http://en.wikipedia.org/wiki/Allen's_Interval_Algebra" rel="nofollow noreferrer">"Interval Algebra"</a></p>
| 20,766 |
<p>If you could help me with ANY part of this question, I would appreciate it. Thanks.</p>
<pre><code>2^0 = 1
2^N = 2^(N-1) + 2^(N-1)
</code></pre>
<ol>
<li><p>Convert this definition into an exactly equivalent tree-recursive function called two-to-the-power-of. Describe its asymptotic time complexity and explain why it has this time complexity.</p></li>
<li><p>Now write a function called tttpo_rec which computes the exact same thing, but which uses a linear recursive process which has an O(n) time complexity and also uses O(n) space for its pending operations.</p></li>
<li><p>Now write a function called tttpo_iter which computes the exact same thing, but which uses a linear iterative process which has an O(n) time complexity and also uses constant space.</p></li>
<li><p>Now let's say you want to generalize one of the preceding definitions so that it will handle arbitrary integer powers, so that you can compute 2^N, 3^N etc. Write a function called to-the-power-of that takes two arguments and raises one to the power of the other.</p></li>
</ol>
<p>Here's the template:</p>
<pre><code>;; to-the-power-of: integer integer -> integer
;; This function raises m to the power of n, returning the result.
;; m must be > 0 and n >= 0.
(define (to-the-power-of m n)
  ...)
(check-expect (to-the-power-of 1 0) 1) ; base case
(check-expect (to-the-power-of 2 3) 8) ; recursive case
(check-expect (to-the-power-of 3 2) 9) ; recursive case
</code></pre>
<p>We'll add one more restriction: you can't use the * operator; you can only use the following recursive function to do multiplications:</p>
<pre><code>;;; multiply: integer integer -> integer
;; This function multiplies two non-negative integers, returning the result.
;; Its time complexity is O(a).
(define (multiply a b)
  ...)
</code></pre>
<p>Write the function, and describe what its time complexity is and why.</p>
| <p>The hues of magenta, yellow, and cyan are primary for subtractive combination (e.g. paints or inks) rather than additive combination such as light where red, green, and blue are primary.</p>
<p><a href="http://en.wikipedia.org/wiki/Primary_colors" rel="noreferrer">Wikipedia has more detail on the whys and wherefores</a>.</p>
| <p>Because combining light sources (which computer monitors do) does not work the same way as combining printed ink. It's just a guess.</p>
| 28,266 |
<p>I need to add DVD writing functionality to an application I'm working on. However it needs to be able to write out files that are being grabbed "live" from a camera, over a long period of time. I can't wait until all the files are captured before I start writing them to the DVD, I need to write them out in chunks as I go along.</p>
<p>I've looked at <a href="http://msdn.microsoft.com/en-us/library/aa364806(VS.85).aspx" rel="nofollow noreferrer">IMAPI v2</a>, but the main problems seems to be that you need to point it to all the files you plan to write out to disk before you start the burning process. I know it has to concept of "sessions", which means you can write to the DVD in several parts, before you finally "close" it.</p>
<p>But I was wondering if there were any other DVD writing SDK's that allow you to be constantly writing files to a DVD and in particular files that are only in memory. It would be more efficient if I didn't have to write the captured images out to hard before they are burned to DVD.</p>
<p>The solution needs to work under .NET on Windows XP and vista</p>
| <p>The <a href="http://www.primoburner.com/" rel="nofollow noreferrer">Primo burning engine</a> for .Net works nicely. </p>
| <p>Format your optical media to a <a href="http://en.wikipedia.org/wiki/Live_File_System" rel="nofollow">Live File System</a> (<a href="http://en.wikipedia.org/wiki/Packet_writing" rel="nofollow">Incremental Packet Writing</a> instead of using a mastered disc format with IMAPIv2) and then you will be able to add any file just using i.e. <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa363851%28v=vs.85%29.aspx" rel="nofollow">CopyFile</a> without creating new sessions.</p>
<p>This way you will not waste lead-in/lead-out space each time you want to add a new file in a new session...</p>
<p>Notice that to ensure compatibility of disks created on Windows Vista, UDF 2.01 or lower should be selected.</p>
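<p>For example (a hypothetical sketch only; the drive letter and file name are placeholders): once the disc is formatted this way, the burner's drive letter behaves like any other writable volume, so a frame held only in memory can be written to it directly from .NET.</p>
<pre><code>using System.IO;

// Hypothetical: E:\ is the DVD burner formatted as a Live File System (UDF).
static void WriteFrameToDisc(byte[] frameBytes, int frameNumber)
{
    string path = Path.Combine(@"E:\", string.Format("frame_{0:D6}.jpg", frameNumber));
    File.WriteAllBytes(path, frameBytes);
}
</code></pre>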
| 8,322 |
<p>I have a legacy VB6 application that was built using MSDE.</p>
<p>As many client's database grow towards the MSDE 2 GB limit they are upgraded to SQL 2005 Express.</p>
<p>This has proven very successful until today.</p>
<p>I have spent the entire day troubleshooting a client's network on which our application runs unacceptably slowly when connecting to a SQL 2005 Express named instance across the "network". </p>
<p>I say "network" because it is only two XP SP2 machines - there is no dedicated server here. No AD.</p>
<p>In trying to isolate this problem I have installed SQL 2005 Express on both machines and placed copies of our database on both machines. I have even completely reinstalled our application using the SQL2005 Express install routine we now have. It makes no difference whether I restore an old MSDE database or use a newly created SQL 2005 Express one.</p>
<p>When running our application and connecting to either machine's local server performance is fine. Once you connect our application on either PC to the server on the other PC, it is unworkably slow. (Regardless of the combination).</p>
<p>Now, I have rebuilt statistics (exec sp_updatestats), rebuilt ALL indexes, disabled (temporarily) firewalls and virus software and clutched and countless other straws.</p>
<p>I have resorted to running FileMon and ProcessMon on both machines and have even written a little test application to simply connect and query a table in the database. It too runs slowly - (takes about 5 - 6 seconds to connect).</p>
<p>The monitors (File and Process) show delays when SQL Server is writing to a log file (c:\program files\microsoft sql server\mssql.1\log files\log_12.trc).</p>
<p>Other tools though, like SQL Management Studio Express and even SSEUtil (a SQL Server Express Diagnostic Utility I found) run perfectly when connecting from the client to the server. Queries (even large ones) run as you would expect.</p>
<p>I feel sure this problem is environmental as we have so many sites running what would appear to be the same setup, with no such problems.</p>
<p>Can someone tell me what I should be doing to isolate this problem or even offer any clues or suggestions that could help solve this?</p>
<p>The 'define' operation itself is rather slow - confirmed by the xdebug profiler.</p>
<p>Here is benchmarks from <a href="http://t3.dotgnu.info/blog/php/my-first-php-extension.html" rel="noreferrer">http://t3.dotgnu.info/blog/php/my-first-php-extension.html</a>: </p>
<ul>
<li><p>pure 'define'<br>
380.785 fetches/sec<br>
14.2647 mean msecs/first-response</p></li>
<li><p>constants defined with 'hidef' extension<br>
930.783 fetches/sec<br>
6.30279 mean msecs/first-response </p></li>
</ul>
<hr>
<p><strong>broken link update</strong></p>
<p>The blog post referenced above has left the internet. It can still be viewed <a href="http://web.archive.org/web/20100504144640/http://t3.dotgnu.info/blog/php/my-first-php-extension.html" rel="noreferrer">here via Wayback Machine</a>. Here is another <a href="http://shwup.blogspot.com/2010/04/about-constants.html" rel="noreferrer">similar article</a>.</p>
<p>The libraries the author references can be found <a href="http://sg.php.net/manual/en/function.apc-define-constants.php" rel="noreferrer">here (apc_define_constants)</a> and <a href="http://pecl.php.net/package/hidef" rel="noreferrer">here (hidef extension)</a>.</p>
| <p>Main differences:</p>
<ul>
<li>define is constant, variable is variable </li>
<li>they different scope/visibility</li>
</ul>
| 17,857 |
<p>Anybody know a nice way to restart a mongrel cluster via capistrano in a "rolling" style, eg, one mongrel at a time. Would be great to have a bit of wait time in there as well for each, to let the mongrel load the rails app up as well. </p>
<p>I've done some searching, and haven't found too much, so looking for help before I dive into the mongrel_cluster gem myself.</p>
<p>Thanks!</p>
| <p>I agree with the seesaw approach more than the rolling approach you are seeking. The problem is that you end up in situations where load balancing can throw users back and forth between different versions of the application while you are transitioning.</p>
<p>The solutions we came up with (before finding SeeSaw, which we don't use) was to take half of the mongrels off line from the load balancer. Shut them down. Update them. Start them up. Put those mongrels back online in the load balancer and take the other half off. Shut the second half down. Update the second half. Start them up. This greatly minimizes the time where you have two different versions of the application running simultaneously.
I wrote a windows bat file to do this. (Deploying on Windows is not recommended, btw)</p>
<p>It is very important to note that having database migrations can make the whole approach a little dangerous. If you have only additive migrations, you can run those at any time before the deployment. If you are removing columns, you need to do it after the deployment. If you are renaming columns, it is better to split it into a create a new column and copy data into it migration to run before deployment and a separate script to remove the old column after deployment. In fact, it may be dangerous to use your regular migrations on a production database in general if you don't make a specific effort to organize them. All of this points to making more frequent deliveries so each update is lower risk and less complex, but that's a subject for another response. </p>
| <p>Seesaw is a gem found in the <a href="http://rubyforge.org/projects/rails-oceania/" rel="nofollow noreferrer">Rails Oceania Rubyforge Project</a> that provides this kind of functionality to mongrel clusters. However, the project may be suffering from some bit-rot not havain had a release since 2007. Still worth a look even just to pinch the ideas :)</p>
| 15,114 |
<p>How does the operating system know what filesystem a partition is using? In other words, how are FAT16/32, NTFS, ext2/3, etc. distinguished from each other?</p>
| <p>If you're using Win32 APIs on Windows, then you can call GetVolumeInformation (<a href="http://msdn.microsoft.com/en-us/library/aa364993.aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/aa364993.aspx</a>) to determine the type of file system present on a given mounted volume.</p>
<p>For example, if you're trying to detect the file system present on D:, then you can call:</p>
<pre><code>WCHAR FSType[512];
if (GetVolumeInformationW(L"D:\\", NULL, 0, NULL, NULL, NULL, FSType, ARRAYSIZE(FSType))) {
    wprintf(L"FS type = %s\n", FSType);
}
</code></pre>
<p>This will only work, however, if the file system is "recognized" and "mountable" by the running operating system.</p>
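<p>If you are calling this from .NET rather than straight Win32, the same information for recognized, mounted volumes is exposed through <code>System.IO.DriveInfo</code>. A minimal sketch (my own illustration, not part of the Win32 sample above):</p>
<pre><code>using System;
using System.IO;

class ShowFileSystems
{
    static void Main()
    {
        foreach (DriveInfo drive in DriveInfo.GetDrives())
        {
            // DriveFormat throws if the volume isn't ready, so check IsReady first.
            if (drive.IsReady)
                Console.WriteLine("{0} -> {1}", drive.Name, drive.DriveFormat); // e.g. "C:\ -> NTFS"
        }
    }
}
</code></pre>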
<p>On Linux, when you mount a filesystem you can pass -t ext2/ext3 etc. - if you look in /etc/fstab (or equivalent) each drive probably has its fs type listed.</p>
<p>Then for automatically doing it, there is the superblock/equivalent (think windows types call it something else) ...</p>
<p>See this:</p>
<p>Superblock</p>
<blockquote>
<p>Each file system is different and they
have type like ext2, ext3 etc. Further
each file system has size like 5 GB,
10 GB and status such as mount status.
In short each file system has a
superblock, which contains information
about file system such as:</p>
<pre><code>* File system type
* Size
* Status
* Information about other metadata structures
</code></pre>
</blockquote>
<p>Taken from:</p>
<p><a href="http://www.cyberciti.biz/tips/understanding-unixlinux-filesystem-superblock.html" rel="nofollow noreferrer">http://www.cyberciti.biz/tips/understanding-unixlinux-filesystem-superblock.html</a></p>
| 49,858 |
<p>With the recent buzz on multicore programming is anyone exploring the possibilities of using <a href="http://www-unix.mcs.anl.gov/mpi/" rel="noreferrer">MPI</a> ?</p>
| <p>I've used MPI extensively on large clusters with multi-core nodes. I'm not sure if it's the right thing for a single multi-core box, but if you anticipate that your code may one day scale larger than a single chip, you might consider implementing it in MPI. Right now, nothing scales larger than MPI. I'm not sure where the posters who mention unacceptable overheads are coming from, but I've tried to give an overview of the relevant tradeoffs below. Read on for more.</p>
<p>MPI is the de-facto standard for large-scale scientific computation and it's in wide use on multicore machines already. It is very fast. Take a look at the <a href="http://top500.org/lists/2008/11" rel="noreferrer">most recent Top 500 list</a>. The top machines on that list have, in some cases, hundreds of thousands of processors, with multi-socket dual- and quad-core nodes. Many of these machines have very fast custom networks (Torus, Mesh, Tree, etc) and optimized MPI implementations that are aware of the hardware.</p>
<p>If you want to use MPI with a single-chip multi-core machine, it will work fine. In fact, recent versions of Mac OS X come with <a href="http://www.open-mpi.org/" rel="noreferrer">OpenMPI</a> pre-installed, and you can download an install OpenMPI pretty painlessly on an ordinary multi-core Linux machine. OpenMPI is in use at <a href="http://lanl.gov" rel="noreferrer">Los Alamos</a> on most of their systems. <a href="http://llnl.gov" rel="noreferrer">Livermore</a> uses <a href="http://mvapich.cse.ohio-state.edu/" rel="noreferrer">mvapich</a> on their Linux clusters. What you should keep in mind before diving in is that MPI was designed for solving large-scale scientific problems on <em>distributed-memory</em> systems. The multi-core boxes you are dealing with probably have <em>shared memory</em>.</p>
<p>OpenMPI and other implementations use shared memory for local message passing by default, so you don't have to worry about network overhead when you're passing messages to local processes. It's pretty transparent, and I'm not sure where other posters are getting their concerns about high overhead. The caveat is that MPI is not the <em>easiest</em> thing you could use to get parallelism on a single multi-core box. In MPI, all the message passing is explicit. It has been called the "assembly language" of parallel programming for this reason. Explicit communication between processes isn't easy if you're not an experienced <a href="http://en.wikipedia.org/wiki/High-performance_computing" rel="noreferrer">HPC</a> person, and there are other paradigms more suited for shared memory (<a href="http://upc.lbl.gov/" rel="noreferrer">UPC</a>, <a href="http://openmp.org/wp/" rel="noreferrer">OpenMP</a>, and nice languages like <a href="http://www.erlang.org/" rel="noreferrer">Erlang</a> to name a few) that you might try first.</p>
<p>My advice is to go with MPI if you anticipate writing a parallel application that may need more than a single machine to solve. You'll be able to test and run fine with a regular multi-core box, and migrating to a cluster will be pretty painless once you get it working there. If you are writing an application that will only ever need a single machine, try something else. There are easier ways to exploit that kind of parallelism.</p>
<p>Finally, if you are feeling really adventurous, try MPI in conjunction with threads, OpenMP, or some other local shared-memory paradigm. You can use MPI for the distributed message passing and something else for on-node parallelism. This is where big machines are going; future machines with hundreds of thousands of processors or more are expected to have MPI implementations that scale to all <em>nodes</em> but not all cores, and HPC people will be forced to build hybrid applications. This isn't for the faint of heart, and there's a lot of work to be done before there's an accepted paradigm in this space.</p>
<p>You have to decide if you want low-level or high-level threading. If you want low level, then use pthreads. You have to be careful that you don't introduce race conditions and make threading performance work against you. </p>
<p>I have used some OSS packages for C and C++ that are scalable and optimize the task scheduling. TBB (Threading Building Blocks) and Cilk Plus are good and easy to code with, and get applications off the ground. I also believe they are flexible enough to integrate other thread technologies into them at a later point if needed (OpenMP etc.)</p>
<p>www.threadingbuildingblocks.org
www.cilkplus.org</p>
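<p>As a rough illustration of how little code a TBB loop takes (a sketch only - the vector name and size are invented for the example):</p>
<pre><code>#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

int main()
{
    std::vector<double> data(1000000, 1.0);   // made-up workload

    // TBB splits the index range into chunks and schedules them across cores
    tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= 2.0;
        });

    return 0;
}
</code></pre>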
| 17,796 |
<p>I want to create a dmg file for my Mac project. Can someone please tell me how to do this? This being my first Mac project, I do not have any idea how to proceed. I also want to give the user an option of running the app on start-up. How do I do this?</p>
<p>Thanks.</p>
<p>P.S. I also want to add a custom license agreement.</p>
| <p>To do this manually:</p>
<p><strong>Method 1:</strong></p>
<ul>
<li>Make a folder with the files your DMG will contain.</li>
</ul>
<p><a href="https://i.stack.imgur.com/sOBin.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sOBin.png" alt="enter image description here"></a></p>
<ul>
<li>Open Disk Utility (It's in <code>/Applications/Utilities/</code>)</li>
</ul>
<p><a href="https://i.stack.imgur.com/Px0Zh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Px0Zh.png" alt="enter image description here"></a></p>
<ul>
<li>Go to File > New > New Image from Folder (<code>Cmd + Shift + N</code>)</li>
</ul>
<p><a href="https://i.stack.imgur.com/T2owv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T2owv.png" alt="enter image description here"></a></p>
<ul>
<li>Choose the folder containing your files</li>
<li>Make sure "Compressed" is checked, then set where you want to save the created DMG</li>
</ul>
<p><a href="https://i.stack.imgur.com/cgTGZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cgTGZ.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/Oc9m1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oc9m1.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/u58MR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u58MR.png" alt="enter image description here"></a></p>
<p><strong>Method 2:</strong></p>
<p>Doing things like setting a background image can be a bit convoluted (you basically add the background image to the DMG, set the window's properties to use that image, and then, using the command line, rename the background image from <code>background.png</code> to <code>.background.png</code> to make it hidden).</p>
<p>I would recommend <a href="http://www.nscoding.co.uk/" rel="nofollow noreferrer">iDMG</a>, which makes things a bit less tedious. </p>
<p>You can also script the creation of DMGs using the command <code>hdiutil</code>. Something along the lines of</p>
<pre><code>hdiutil create -srcfolder mydirtodmg mydmg.dmg
</code></pre>
<p><a href="https://i.stack.imgur.com/8pYQb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8pYQb.png" alt="enter image description here"></a></p>
<p>As for the custom license agreement, you should look into the tool included with the Developer Tools, "PackageMaker" - it's pretty self-explanatory. It's in <code>/Developer/Applications/Utilities/</code></p>
| <p>I made a little bash script to automate a disc image creation.</p>
<p>It creates a temporary directory to store all the needed files, then exports it as a new DMG file. The temporary directory is then deleted.
You can automatically launch this script at the end of your build process.</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
# Create .dmg file for macOS
# Adapt these variables to your needs
APP_VERS="1.0"
DMG_NAME="MyApp_v${APP_VERS}_macos"
OUTPUT_DMG_DIR="path_to_output_dmg_file"
APP_FILE="path_to_my_app/MyApp.app"
OTHER_FILES_TO_INCLUDE="path_to_other_files"
# The directory of the script
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# The temp directory used, within $DIR
WORK_DIR=`mktemp -d "${DIR}/tmp.XXXXXX"`
# Check if tmp dir was created
if [[ ! "${WORK_DIR}" || ! -d "${WORK_DIR}" ]]; then
echo "Could not create temp dir"
exit 1
fi
# Function to delete the temp directory
function cleanup {
rm -rf "${WORK_DIR}"
#echo "Deleted temp working directory ${WORK_DIR}"
}
# Register the cleanup function to be called on the EXIT signal
trap cleanup EXIT
# Copy application on temp dir
cp -R "${APP_FILE}" "${WORK_DIR}"
# Copy other files without hidden files
rsync -a --exclude=".*" "${OTHER_FILES_TO_INCLUDE}" "${WORK_DIR}"
# Create .dmg
hdiutil create -volname "${DMG_NAME}" -srcfolder "${WORK_DIR}" -ov -format UDZO "${OUTPUT_DMG_DIR}/${DMG_NAME}.dmg"
</code></pre>
| 47,999 |
<p>I have an HTML page (say welcome.html) which contains an iframe to a page I have no control over (say app.html). The user performs some actions using the app within the iframe and clicks submit. Once they do this, they are taken to a new page (say thanks.jsp), which loads within the iframe. Is there a way in which I can force thanks.jsp to load in the full frame and not the iframe once submit is clicked? Remember, I have no control over the logic behind that Submit button or app.html. I do however have control over welcome.html and thanks.jsp. If possible, I would like to stick with HTML and/or JavaScript. Thank you in advance.</p>
| <p>You probably want to use a framebuster, with a base target in case it fails.</p>
<p>First:</p>
<p><strong>If</strong> <code>thanks.jsp</code> is requested via a post request, redirect so that you present the page as the response to a get request.</p>
<p>Then:</p>
<p>Include framebuster JavaScript:</p>
<pre><code><script type="text/javascript">
if (self != top) { top.location.replace(location); }
</script>
</code></pre>
<p>Finally:</p>
<p>In case the user doesn't have JavaScript enabled, make sure they don't stay in the frame any longer than they have to:</p>
<pre><code><base target="_top"></code></pre>
| <p>On <code>thanks.jsp</code> you can put in the following JS:</p>
<pre><code>// Parent window not the same as this one
if (self !=top)
{
top.location.href = self.location.href;
}
</code></pre>
<p>This will work provided that you have <code>thanks.jsp</code> on the same server as the original page containing the frame, due to the <a href="http://en.wikipedia.org/wiki/Same_origin_policy" rel="nofollow noreferrer">same origin policy</a>.</p>
<p>The above code checks the url of the page you're on, then the one of the page it's executing on (<code>thanks.jsp</code>) - if they don't match you're sent to the <code>thanks.jsp</code> url. This method works fine when <code>thanks.jsp</code> is a static page, but won't carry postdata etc across with it.</p>
| 23,611 |
<p>I've not used C++ very much in the past, and have recently been doing a lot of C#, and I'm really struggling to get back into the basics of C++ again. This is particularly tricky as work mandates that none of the most handy C++ constructs can be used, so all strings must be char *'s, and there is no provision for STL lists.</p>
<p>What I'm currently trying to do is to create a list of strings, something which would take me no time at all using STL or in C#. Basically I want to have a function such as:</p>
<pre><code>char **registeredNames = new char*[numberOfNames];
</code></pre>
<p>Then,</p>
<pre><code>RegisterName(const * char const name, const int length)
{
//loop to see if name already registered snipped
if(notFound)
{
registeredNames[lastIndex++] = name;
}
}
</code></pre>
<p>or, if it was C#...</p>
<pre><code>if(!registeredNames.Contains(name))
{
registeredNames.Add(name);
}
</code></pre>
<p>and I realize that it doesn't work. I know the const nature of the passed variables (a const pointer and a const string) makes it rather difficult, but my basic problem is that I've always avoided this situation in the past by using STL lists etc. so I've never had to work around it!</p>
| <p>You'll probably need to use strcmp to see if the string is already stored:</p>
<pre><code>for (int index=0; index<lastIndex; index++)
{
if (strcmp(registeredNames[index], name) == 0)
{
return; // Already registered
}
}
</code></pre>
<p>Then if you really need to store a copy of the string, then you'll need to allocate a buffer and copy the characters over.</p>
<pre><code>char* nameCopy = (char*)malloc(length+1);
strcpy(nameCopy, name);
registeredNames[lastIndex++] = nameCopy;
</code></pre>
<p>You didn't mention whether your input is NULL terminated - if not, then extra care is needed, and strcmp/strcpy won't be suitable.</p>
| <p>If you are not worried about conventions and just want to get the job done use realloc. I do this sort of thing for lists all of the time, it goes something like this:</p>
<pre><code>T** list = 0;
unsigned int length = 0;
T* AddItem(T Item)
{
    list = (T**)realloc(list, sizeof(T*)*(length+1));
    if(!list) return 0;
    list[length] = new T(Item);
    ++length;
    return list[length - 1];
}
void CleanupList()
{
for(unsigned int i = 0; i < length; ++i)
{
        delete list[i];
}
    free(list);
}
</code></pre>
<p>There is more you can do, e.g. only realloc each time the list size doubles, functions for removing items from list by index or by checking equality, make a template class for handling lists etc... (I have one I wrote ages ago and always use myself... but sadly I am at work and can't just copy-paste it here). To be perfectly honest though, this will probably not outperform the STL equivalent, although it may equal its performance if you do a ton of work or have an especially poor implementation of STL.</p>
<p>Annoyingly C++ is without an operator renew/resize to replace realloc, which would be very useful.</p>
<p>Oh, and apologies if my code is error ridden, I just pulled it out from memory.</p>
| 11,714 |
<p>How come this doesn't work (operating on an empty select list <code><select id="requestTypes"></select></code></p>
<pre><code>$(function() {
$.getJSON("/RequestX/GetRequestTypes/", showRequestTypes);
}
);
function showRequestTypes(data, textStatus) {
$.each(data,
function() {
var option = new Option(this.RequestTypeName, this.RequestTypeID);
// Use Jquery to get select list element
var dropdownList = $("#requestTypes");
if ($.browser.msie) {
dropdownList.add(option);
}
else {
dropdownList.add(option, null);
}
}
);
}
</code></pre>
<p>But this does:</p>
<ul>
<li><p>Replace:</p>
<p><code>var dropdownList = $("#requestTypes");</code></p></li>
<li><p>With plain old javascript:</p>
<p><code>var dropdownList = document.getElementById("requestTypes");</code></p></li>
</ul>
| <p><code>$("#requestTypes")</code> returns a jQuery object that contains all the selected elements. You are attempting to call the <code>add()</code> method of an individual element, but instead you are calling the <code>add()</code> method of the jQuery object, which does something very different.</p>
<p>In order to access the DOM element itself, you need to treat the jQuery object as an array and get the first item out of it, by using <code>$("#requestTypes")[0]</code>.</p>
| <p>For stuff like this, I use <a href="http://www.texotela.co.uk/code/jquery/select/" rel="nofollow noreferrer" title="jQuery Select box plugin">texotela's select box plugin</a> with its simple ajaxAddOption function.</p>
| 12,026 |
<p>There are three assembly version attributes. What are differences? Is it ok if I use <code>AssemblyVersion</code> and ignore the rest?</p>
<hr>
<p>MSDN says:</p>
<ul>
<li><p><a href="https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assemblyversionattribute" rel="noreferrer">AssemblyVersion</a>:</p>
<blockquote>
<p>Specifies the version of the assembly being attributed. </p>
</blockquote></li>
<li><p><a href="https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assemblyfileversionattribute" rel="noreferrer">AssemblyFileVersion</a>:</p>
<blockquote>
<p>Instructs a compiler to use a specific version number for the Win32 file version resource. The Win32 file version is not required to be the same as the assembly's version number. </p>
</blockquote></li>
<li><p><a href="https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assemblyinformationalversionattribute" rel="noreferrer">AssemblyInformationalVersion</a>:</p>
<blockquote>
<p>Defines additional version information for an assembly manifest. </p>
</blockquote></li>
</ul>
<hr>
<p><em>This is a follow-up to <a href="https://stackoverflow.com/questions/62353/what-are-the-best-practices-for-using-assembly-attributes">What are the best practices for using Assembly Attributes?</a></em></p>
| <p><strong>AssemblyVersion</strong></p>
<p>Where other assemblies that reference your assembly will look. If this number changes, other assemblies must update their references to your assembly! Only update this version if it breaks backward compatibility. The <code>AssemblyVersion</code> is required.</p>
<p>I use the format: <em>major.minor</em> (and <em>major</em> for very stable codebases). This would result in:</p>
<pre><code>[assembly: AssemblyVersion("1.3")]
</code></pre>
<p>If you're following <a href="https://semver.org/" rel="noreferrer">SemVer</a> strictly then this means you only update when the <em>major</em> changes, so 1.0, 2.0, 3.0, etc.</p>
<p><strong>AssemblyFileVersion</strong></p>
<p>Used for deployment (like setup programs). You can increase this number for every deployment. Use it to mark assemblies that have the same <code>AssemblyVersion</code> but are generated from different builds and/or code.</p>
<p>In Windows, it can be viewed in the file properties.</p>
<p>The AssemblyFileVersion is optional. If not given, the AssemblyVersion is used.</p>
<p>I use the format: <em>major.minor.patch.build</em>, where I follow <a href="https://semver.org/" rel="noreferrer">SemVer</a> for the first three parts and use the buildnumber of the buildserver for the last part (0 for local build).
This would result in:</p>
<pre><code>[assembly: AssemblyFileVersion("1.3.2.42")]
</code></pre>
<p>Be aware that <a href="https://learn.microsoft.com/en-us/dotnet/api/system.version" rel="noreferrer">System.Version</a> names these parts as <code>major.minor.build.revision</code>!</p>
<p><strong>AssemblyInformationalVersion</strong></p>
<p>The Product version of the assembly. This is the version you would use when talking to customers or for display on your website. This version can be a string, like '<em>1.0 Release Candidate</em>'.</p>
<p>The <code>AssemblyInformationalVersion</code> is optional. If not given, the AssemblyFileVersion is used.</p>
<p>I use the format: <em>major.minor[.patch] [revision as string]</em>. This would result in:</p>
<pre><code>[assembly: AssemblyInformationalVersion("1.3 RC1")]
</code></pre>
| <p>When an assembly's AssemblyVersion is changed:
if it has a strong name, the referencing assemblies need to be recompiled, otherwise the assembly will not load.
If it does not have a strong name and is not explicitly added to the project file, it will not be copied to the output directory on build, so you may end up missing dependent assemblies, especially after cleaning the output directory. </p>
| 9,018 |
<p>I have a scenario in which I'm going to need an arbitrary number of servers to provide the same SOAP web service. I would like to generate one set of proxy classes and be able to supply them with a location to point them at the different servers at runtime. Unfortunately, it looks as though the <code>wsdl:port</code> node (child of <code>wsdl:service</code>) requires the address of a specific server to be hardcoded. It appears that due to this the URL will be baked into my proxy classes. I know that I could potentially modify this by hand-editing the generated proxy classes, or modifying the code generation, but I'd really prefer not to resort to that. I feel like there's got to be a better way to solve this problem. I just want to decouple the interface definition from the location that the service will be residing at. I'm using VS2008 and C#.NET if that's of any help though best would be a language-agnostic (SOAP or WSDL specific) general solution to this problem.</p>
| <p>No, in .NET you can change the URL at runtime.</p>
<pre><code>Service svc = new Service ();
svc.Url = "Value read from config. file or some such";
output = svc.method (input);
</code></pre>
| <p>When you add a web reference to your project, it places the address of the web service into the .config file of your application / web application. You can then simply change this setting in the config file to point to a different web service location, assuming of course that the services are identical.</p>
| 47,088 |
<p>"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." (Donald Knuth). My SQL tables are unlikely to contain more than a few thousand rows each (and those are the big ones!). SQL Server Database Engine Tuning Advisor dismisses the amount of data as irrelevant. So I shouldn't even think about putting explicit indexes on these tables. Correct?</p>
| <p>The value of indexes is in speeding reads. For instance, if you are doing lots of SELECTs based on a range of dates in a date column, it makes sense to put an index on that column. And of course, generally you add indexes on any column you're going to be JOINing on with any significant frequency. The efficiency gain is also related to the ratio of the size of your typical recordsets to the number of records (i.e. grabbing 20/2000 records benefits more from indexing than grabbing 90/100 records). A lookup on an unindexed column is essentially a linear search.</p>
<p>The cost of indexes comes on writes, because every INSERT also requires an internal insert to each column index.</p>
<p>So, the answer depends entirely on your application -- if it's something like a dynamic website where the number of reads can be 100x or 1000x the writes, and you're doing frequent, disparate lookups based on data columns, indexing may well be beneficial. But if writes greatly outnumber reads, then your tuning should focus on speeding those queries.</p>
<p>It takes very little time to identify and benchmark a handful of your app's most frequent operations both with and without indexes on the JOIN/WHERE columns, I suggest you do that. It's also smart to monitor your production app and identify the most expensive, and most frequent queries, and focus your optimization efforts on the intersection of those two sets of queries (which could mean indexes or something totally different, like allocating more or less memory for query or join caches).</p>
| <p>I guess there is automatic indexing on the primary key of the table, which should be sufficient when querying a table with little data.</p>
<p>So, yes, explicit indexes can be avoided when there is only a small data set to work with.</p>
| 31,507 |
<p>In writing the code that throws the exception I asked about <a href="https://stackoverflow.com/questions/259800/is-there-a-built-in-net-exception-that-indicates-an-illegal-object-state">here</a>, I came to the end of my message, and paused at the punctuation. I realized that nearly every exception message I've ever thrown probably has a ! somewhere.</p>
<pre><code>throw new InvalidOperationException("I'm not configured correctly!");
throw new ArgumentNullException("You passed a null!");
throw new StupidUserException("You can't divide by 0! What the hell were you THINKING??? DUMMY!!!!!");
</code></pre>
<p>What tone do you take when writing exception messages? When going through logs, do you find any certain style of message actually helps more than another?</p>
| <p>A conversational tone in system messages makes the software look unprofessional and sloppy. Exclamation points, insults, and slang don't really have a place in polished exception messages.</p>
<p>Also, I tend to use different styles in Java for runtime exceptions and checked exceptions, since runtime exceptions are addressed to the programmer that made the mistake. Since runtime exceptions might be displayed to end users, I still "keep it clean," but they can be a little more terse and cryptic. Checked exception messages should be more helpful, since it may be that the user can fix the problem if you describe it (e.g., file not found, disk full, no route to host, etc.).</p>
<p>One thing that is helpful, in the absence of a specific field on the exception for the information, is the offending data:</p>
<pre><code>throw new IndexOutOfBoundsException("offset < 0: " + off);
</code></pre>
| <p>I tend to work my exception messages into the exception themselves. E.g. a file_not_found should say "file not found". Specific data should only be included if the user can't figure it out; in this case, the user knows the filename, so I don't add that data. Formatting can be done by whatever outputs the information if necessary, so I try to make them as friendly to reformatting as possible.</p>
| 32,476 |
<p>I am new to PHP and trying to get the following code to work:</p>
<pre><code><?php
include 'config.php';
include 'opendb.php';
$query = "SELECT name, subject, message FROM contact";
$result = mysql_query($query);
while($row = mysql_fetch_array($result, MYSQL_ASSOC))
{
echo "Name :{$row['name']} <br>" .
"Subject : {$row['subject']} <br>" .
"Message : {$row['message']} <br><br>";
"ARTICLE_NO :{$row['ARTICLE_NO']} <br>" .
"ARTICLE_NAME :{$row['ARTICLE_NAME']} <br>" .
"SUBTITLE :{$row['SUBTITLE']} <br>" .
"CURRENT_BID :{$row['CURRENT_BID']} <br>" .
"START_PRICE :{$row['START_PRICE']} <br>" .
"BID_COUNT :{$row['BID_COUNT']} <br>" .
"QUANT_TOTAL :{$row['QUANT_TOTAL']} <br>" .
"QUANT_SOLD :{$row['QUANT_SOLD']} <br>" .
"STARTS :{$row['STARTS']} <br>" .
"ENDS :{$row['ENDS']} <br>" .
"ORIGIN_END :{$row['ORIGIN_END']} <br>" .
"SELLER_ID :{$row['SELLER_ID']} <br>" .
"BEST_BIDDER_ID :{$row['BEST_BIDDER_ID']} <br>" .
"FINISHED :{$row['FINISHED']} <br>" .
"WATCH :{$row['WATCH']} <br>" .
"BUYITNOW_PRICE :{$row['BUYITNOW_PRICE']} <br>" .
"PIC_URL :{$row['PIC_URL']} <br>" .
"PRIVATE_AUCTION :{$row['PRIVATE_AUCTION']} <br>" .
"AUCTION_TYPE :{$row['AUCTION_TYPE']} <br>" .
"INSERT_DATE :{$row['INSERT_DATE']} <br>" .
"UPDATE_DATE :{$row['UPDATE_DATE']} <br>" .
"CAT_1_ID :{$row['CAT_1_ID']} <br>" .
"CAT_2_ID :{$row['CAT_2_ID']} <br>" .
"ARTICLE_DESC :{$row['ARTICLE_DESC']} <br>" .
"DESC_TEXTONLY :{$row['DESC_TEXTONLY']} <br>" .
"COUNTRYCODE :{$row['COUNTRYCODE']} <br>" .
"LOCATION :{$row['LOCATION']} <br>" .
"CONDITIONS :{$row['CONDITIONS']} <br>" .
"REVISED :{$row['REVISED']} <br>" .
"PAYPAL_ACCEPT :{$row['PAYPAL_ACCEPT']} <br>" .
"PRE_TERMINATED :{$row['PRE_TERMINATED']} <br>" .
"SHIPPING_TO :{$row['SHIPPING_TO']} <br>" .
"FEE_INSERTION :{$row['FEE_INSERTION']} <br>" .
"FEE_FINAL :{$row['FEE_FINAL']} <br>" .
"FEE_LISTING :{$row['FEE_LISTING']} <br>" .
"PIC_XXL :{$row['PIC_XXL']} <br>" .
"PIC_DIASHOW :{$row['PIC_DIASHOW']} <br>" .
"PIC_COUNT :{$row['PIC_COUNT']} <br>" .
"ITEM_SITE_ID :{$row['ITEM_SITE_ID']};
}
include 'closedb.php';
?>
</code></pre>
<p>However I get this error:</p>
<pre><code>Parse error: syntax error, unexpected $end in C:\Programme\EasyPHP 2.0b1\www\test.php on line 56
</code></pre>
<p>I would also like to know if there is perhaps an easier way to obtain mysql records instead of typing by hand?</p>
<p>edit:</p>
<p>I fixed the semicolon and quote issue, and now get:</p>
<pre><code>Parse error: syntax error, unexpected T_STRING in C:\Programme\EasyPHP 2.0b1\www\test.php on line 51
</code></pre>
<p>I am sorry I don't know how to make colors in the code.</p>
| <p><strong>Edit</strong></p>
<p>You say that you're still getting an error. Did you remember to add a <strong>.</strong> when you removed that extra semi-colon?</p>
<hr>
<p>You have a semi-colon in the middle of your string, two lines after the echo.</p>
<p><img src="https://farm4.static.flickr.com/3049/2989189590_754c627f5d.jpg?v=0"></p>
<p>Also, the end of the string is missing a double-quote.</p>
<p><img src="https://farm4.static.flickr.com/3151/2988333441_7e6705715d.jpg?v=0"></p>
<hr>
<p>As far as a cleaner way to output all the values goes, you can loop over the result array like this:</p>
<pre><code>while($row = mysql_fetch_array($result, MYSQL_ASSOC))
{
foreach($row as $field=>$value)
{
echo "$field: {$value} <br />";
}
}
</code></pre>
| <p><strong>Edit:</strong>
Your SQL query only returns 3 fields. But you try to access a lot more than that. Try "SELECT *" if you want to return all the fields of a table. Otherwise make sure you select the fields you try to read (probably not why you get an error though, just an observation).</p>
<p>The syntax coloring of </p>
<pre><code>include 'closedb.php';
?>
</code></pre>
<p>indicates that the string lacks a closing quote. The line</p>
<pre><code>"ITEM_SITE_ID :{$row['ITEM_SITE_ID']};
</code></pre>
<p>confirms that.</p>
<p>Also the line</p>
<pre><code>"Message : {$row['message']} <br><br>";
</code></pre>
<p>ends the string concatenation. The semi-colon should probably be a period.</p>
| 31,573 |
<p>I need to store 100-200 pieces of data in MySQL, and the data would be separated by pipes.</p>
<p>Any idea how to store it in MySQL? Should I use a single column, or should I make multiple columns? I don't know exactly how many values users will input. </p>
<p>I made a form, but it halted at the part where multiple pieces of data need to be stored.</p>
<p>Does anyone know how to store multiple values in a single column, or is there an alternative way?</p>
<p>please help me..</p>
<p>thank you very much</p>
| <p>You should implement your table with an ID for the source of the data. This ID will be used to group all those pieces of similar data so you don't need to know how many you have beforehand.</p>
<p>Your table columns and data could be set up like this:</p>
<pre><code>sourceID data
-------- ----
1 100
1 200
1 300
2 100
3 100
3 200
</code></pre>
<p>When you query the database, you can just pull in all of the data with the same sourceID. With the data above, the following query would return two pieces of data.</p>
<pre><code>SELECT data
FROM dataTable
WHERE sourceID = 3
</code></pre>
<p>If you have multiple tables, you'll need to associate them with each other using <code>JOIN</code> syntax. Say you have a main table with user data and you want to associate all of this input data with each user.</p>
<pre><code>userID userName otherData
------ -------- ---------
1 Bob xyz
2 Jim abc
3 Sue lmnop
</code></pre>
<p>If you want to join data from this table (userTable) with data from the dataTable, use a query like this:</p>
<pre><code>SELECT userID, userName, data, otherData
FROM userTable
LEFT JOIN dataTable
ON userTable.userID = dataTable.sourceID
WHERE userTable.userID = 1
</code></pre>
<p>This query will give you all of the data for the user with an ID of 1. This assumes that the sourceID in your data table is using the userID from the user table to keep track of who the extra data belongs to.</p>
<p>Note that this is not the only JOIN syntax in SQL. You can learn about other types of joins <a href="http://www.keithjbrown.co.uk/vworks/mysql/mysql_p5.php" rel="nofollow noreferrer">here</a>.</p>
| <p>If you have a form where this data is coming from, store each input from your form into its own separate column.</p>
<p>Look for relationships in your data: sounds like you have a "has many" relationship which indicates you may want a linking table where you could do a simple join query...</p>
<p>Storing multiple data in a single column will be a nightmare for queries and updates, unless you're storing XML, and even then it would give me nightmares...</p>
| 37,338 |
<p>Is there a standard X / Gnome program that will display the X,Y width and depth in pixels of a window that I select? Something similar to the way an xterm shows you the width and depth of the window (in lines) as you resize it.</p>
<p>I'm running on Red Hat Enterprise Linux 4.4.</p>
<p>Thanks!</p>
| <p>Yes, you're looking for the program 'xwininfo'. Run it in another terminal and then click on the window you want info about and it will give it to you. </p>
<p>Hope this helps! </p>
| <pre><code>$ xwininfo
xwininfo: Please select the window about which you
would like information by clicking the
mouse in that window.
xwininfo: Window id: 0x1200007 "xeyes"
Absolute upper-left X: 1130
Absolute upper-left Y: 0
Relative upper-left X: 0
Relative upper-left Y: 0
Width: 150
Height: 100
Depth: 24
Visual Class: TrueColor
Border width: 0
Class: InputOutput
Colormap: 0x20 (installed)
Bit Gravity State: NorthWestGravity
Window Gravity State: NorthWestGravity
Backing Store State: NotUseful
Save Under State: no
Map State: IsViewable
Override Redirect State: no
Corners: +1130+0 -0+0 -0-924 +1130-924
-geometry 150x100-0+0
</code></pre>
| 9,876 |
<p>I'm trying to solve this flickering problem on the iphone (open gl es game). I have a few images that don't have pow-of-2 dimensions. I'm going to replace them with images with appropriate dimensions... but why do the dimensions need to be powers of two?</p>
| <p>The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping. </p>
<p><strong>What is mipmapping?</strong></p>
<p>Smaller versions of the image will be created in order to make the thing look correct at a very small size. The image is divided by 2 over and over to make new images. </p>
<p>So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1. </p>
<p>If this image was 256x192, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5 which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.</p>
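<p>If you want to see the chains for yourself, here is a tiny sketch (rounding down, which is one common convention) that prints every mip level for the two example sizes above:</p>
<pre><code>#include <cstdio>
#include <algorithm>

// Print every mip level, halving each dimension (and clamping to 1)
void printMipChain(int width, int height)
{
    int level = 0;
    while (true)
    {
        std::printf("level %d: %dx%d\n", level++, width, height);
        if (width == 1 && height == 1) break;
        width  = std::max(1, width / 2);
        height = std::max(1, height / 2);
    }
    std::printf("\n");
}

int main()
{
    printMipChain(256, 128); // halves cleanly all the way down to 1x1
    printMipChain(256, 192); // hits 4x3, where 2x1.5 has to be rounded
    return 0;
}
</code></pre>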
<p>Some hardware also requires a square image but this isn't very common anymore.</p>
<p><strong>Why do you need mipmapping?</strong></p>
<p>Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.</p>
<p>Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.</p>
<p>Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.</p>
<p>With mipmaps, the hardware will have access to 2x2 version of the texture, so each pixel on it will be the average color of that quadrant of the image. This eliminates the odd color flashing.</p>
<p><a href="http://en.wikipedia.org/wiki/Mipmap" rel="noreferrer">http://en.wikipedia.org/wiki/Mipmap</a></p>
<p>Edit to people who say this isn't true anymore:
It's true that many modern GPUs can support non-power-of-two textures but it's also true that many cannot.</p>
<p>In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash upon game load on a laptop that was only about a year old. It worked fine on most machines though. It's a safe bet that the iPhone's gpu is considerably more simple than a full PC gpu.</p>
| <p>Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-2 sizes are desirable.</p>
<p>In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.</p>
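<p>For example, a sketch of the kind of inner-loop addressing this is talking about, assuming an 8-bit texel buffer and power-of-2 dimensions (all names here are invented for illustration):</p>
<pre><code>#include <cstdint>

// With power-of-2 dimensions, the multiply becomes a shift and the
// wrap-around (repeat) addressing becomes a bitwise AND.
const int kLog2Width = 8;                  // width = 256
const int kWidth     = 1 << kLog2Width;
const int kHeight    = 256;

std::uint8_t texels[kWidth * kHeight];

inline std::uint8_t sampleWrapped(int u, int v)
{
    int x = u & (kWidth - 1);              // wraps u into [0, kWidth) - replaces a modulo
    int y = v & (kHeight - 1);             // same for v
    return texels[(y << kLog2Width) | x];  // same as y * kWidth + x
}
</code></pre>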
<p>I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.</p>
<p><strong>Edit:</strong> The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.</p>
| 26,378 |
<p>for the xml </p>
<pre><code><grandparent>
<parent1>
<child>data1</child>
</parent1>
<parent2>
<child>data2</child>
</parent2>
</grandparent>
</code></pre>
<p>I need the list containing tuples of parent,data for each parent in xml.</p>
<p>Is there a way to do it USING cElementTree? I am able to do it for child,data, but unfortunately child is identical in all the values, hence it is of not much use.</p>
| <p>It seems you can get access to the parent from the child using version 1.3 of ElementTree (check <a href="http://effbot.org/zone/element-xpath.htm" rel="nofollow noreferrer">http://effbot.org/zone/element-xpath.htm</a>), by using xpath commands like <code>child.find('../parent')</code>. But I think python ships with version 1.2 or something.</p>
<p>You should also check for lxml which is compatible with etree and has full Xpath support <a href="http://lxml.de/" rel="nofollow noreferrer">http://lxml.de/</a></p>
| <p>This syntax seemed to work for cElementTree </p>
<pre><code>ET.fromstring("<c><a><b></b></a></c>").find('.//b/..')
</code></pre>
<p>No going to base parent, and using double slash then single slash in path.<br>
(would have posted as a comment to above thread but it seems I have no privilege to)</p>
| 48,942 |
<p>As someone who hasn't used either technology on real-world projects I wonder if anyone knows how these two complement each other and how much their functionalities overlap?</p>
| <p>LINQ to SQL forces you to use the table-per-class pattern. The benefits of using this pattern are that it's quick and easy to implement and it takes very little effort to get your domain running based on an existing database structure. For simple applications, this is perfectly acceptable (and oftentimes even preferable), but for more complex applications devs will often suggest using a <a href="http://en.wikipedia.org/wiki/Domain_driven_design" rel="noreferrer">domain driven design</a> pattern instead (which is what NHibernate facilitates).</p>
<p>The problem with the table-per-class pattern is that your database structure has a direct influence over your domain design. For instance, let's say you have a Customers table with the following columns to hold a customer's primary address information:</p>
<ul>
<li>StreetAddress</li>
<li>City</li>
<li>State</li>
<li>Zip</li>
</ul>
<p>Now, let's say you want to add columns for the customer's mailing address as well so you add in the following columns to the Customers table:</p>
<ul>
<li>MailingStreetAddress</li>
<li>MailingCity</li>
<li>MailingState</li>
<li>MailingZip</li>
</ul>
<p>Using LINQ to SQL, the Customer object in your domain would now have properties for each of these eight columns. But if you were following a domain driven design pattern, you would probably have created an Address class and had your Customer class hold two Address properties, one for the mailing address and one for their current address.</p>
<p>That's a simple example, but it demonstrates how the table-per-class pattern can lead to a somewhat smelly domain. In the end, it's up to you. Again, for simple apps that just need basic CRUD (create, read, update, delete) functionality, LINQ to SQL is ideal because of simplicity. But personally I like using NHibernate because it facilitates a cleaner domain.</p>
<p>Edit: @lomaxx - Yes, the example I used was simplistic and could have been optimized to work well with LINQ to SQL. I wanted to keep it as basic as possible to drive home the point. The point remains though that there are several scenarios where having your database structure determine your domain structure would be a bad idea, or at least lead to suboptimal OO design.</p>
| <p>Or you could use the Castle ActiveRecords project. I've been using that for a short time to ramp up some new code for a legacy project. It uses NHibernate and works on the active record pattern (surprising given its name I know). I haven't tried, but I assume that once you've used it, if you feel the need to drop to NHibernate support directly, it wouldn't be too much to do so for part or all of your project. </p>
| 4,661 |
<p>The located assembly's manifest definition does not match the assembly reference</p>
<p>getting this when running nunit through ncover. Any idea?</p>
| <p>This is a mismatch between assemblies: a DLL referenced from an assembly doesn't have a method signature that's expected.</p>
<p>Clean the solution, rebuild everything, and try again. </p>
<p>Also, be careful if this is a reference to something that's in the GAC; it could be that something somewhere is pointing to an incorrect version. Make sure (through the Properties of each reference) that the correct version is chosen or that Specific Version is set false.</p>
| <p>If you got this error trying to add a component to Visual Studio - <code>Microsoft.VisualStudio.TemplateWizardInterface</code> - (after trying to install weird development tools),</p>
<p>consider this solution (courtesy of larocha - thanks, whoever you are):</p>
<ol>
<li>Open C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config in a text editor </li>
<li>Find this string: "<code>Microsoft.VisualStudio.TemplateWizardInterface</code>" </li>
<li>Comment out the element so it looks like this: </li>
</ol>
<p><code><dependentAssembly><br>
<!-- assemblyIdentity name="Microsoft.VisualStudio.TemplateWizardInterface" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" / --><br>
<bindingRedirect oldVersion="0.0.0.0-8.9.9.9" newVersion="9.0.0.0" /><br>
</dependentAssembly></code></p>
<p>source: <a href="http://webclientguidance.codeplex.com/workitem/15444" rel="nofollow">http://webclientguidance.codeplex.com/workitem/15444</a></p>
| 11,951 |
<p>SVN merging is very time consuming (especially on large code bases) and error prone. Eclipse and Netbeans plugins don't serve me very well. At the moment I'm using a combination of manual "svn merge" and the graphical diff <a href="http://meld.sourceforge.net/" rel="noreferrer">Meld</a>. Are there better alternatives (non-win32)?</p>
| <p>Meld is arguably one of the best and one of the most used tools for non-win32. On Windows or with Wine <a href="http://winmerge.org/" rel="noreferrer">WinMerge</a> wins hands down. Some alternatives for Linux:</p>
<ul>
<li><a href="http://www.caffeinated.me.uk/kompare/" rel="noreferrer">Kompare</a></li>
<li><a href="http://kdiff3.sourceforge.net/" rel="noreferrer">KDiff3</a></li>
<li><a href="http://freshmeat.net/projects/dirdiff/" rel="noreferrer">DirDiff</a></li>
</ul>
<p>P.S. Actually on Windows the TortoiseSVN merge tool gives great results too :)</p>
| <p>Which platform are you running on? On Windows I use <strong>TortoiseSVN</strong> to carry out most diffs and merges pretty easily. Only some of the more complicated strands of diffs are performed with UltraCompare.</p>
| 26,819 |
<p>I'm trying to read data from a photocell resistor and my Arduino Decimila and then graph it in real-time with Processing.</p>
<p>Should be painfully simple; but it's growing into a little bit of a nightmare for me.</p>
<p>code I'm running on my Arduino:</p>
<pre class="lang-java prettyprint-override"><code>int photoPin;
void setup(){
photoPin = 0;
Serial.begin( 9600 );
}
void loop(){
int val = int( map( analogRead( photoPin ), 0, 1023, 0, 254 ) );
Serial.println( val ); //sending data over Serial
}
</code></pre>
<p>code I'm running in Processing: </p>
<pre class="lang-java prettyprint-override"><code>import processing.serial.*;
Serial photocell;
int[] yvals;
void setup(){
size( 300, 150 );
photocell = new Serial( this, Serial.list()[0], 9600 );
photocell.bufferUntil( 10 );
yvals = new int[width];
}
void draw(){
background( 0 );
for( int i = 1; i < width; i++ ){
yvals[i - 1] = yvals[i];
}
if( photocell.available() > 0 ){
yvals[width - 1] = photocell.read();
}
for( int i = 1; i < width; i++ ){
stroke( #ff0000 );
line( i, yvals[i], i, height );
}
println( photocell.read() ); // for debugging
}
</code></pre>
<p>I've tested both bits of code separately and I know that they work. It's only when I try to have the input from the Arduino going to Processing that the problems start.</p>
<p>When I view the data in Arduino's "Serial Monitor", I get a nice constant flow of data that seems to look valid.</p>
<p>But when I read that same data through Processing, I get a repeating pattern of random values.</p>
<p>Halp?</p>
| <p>You could transmit that data with the Plotly Arduino API, which along with the documentation and setup is available <a href="http://plot.ly/api/arduino" rel="nofollow noreferrer">here</a>. Basic idea: you can continuously stream data from your Arduino, or transmit a single chunk. </p>
<p>Then, if you want to embed it into a site, you'll want to grab the URL and use this snippet:</p>
<pre><code><iframe id="igraph" src="https://plot.ly/~abhishek.mitra.963/1/400/250/" width="400" height="250" seamless="seamless" scrolling="no"></iframe>
</code></pre>
<p>You can change the width/height dimensions in that snippet. Note: you need to swap in your own URL there to get it stream through.</p>
<p><a href="http://plot.ly/~flann321/9/" rel="nofollow noreferrer">Here's an example of how it looks to stream Arduino data</a></p>
<p><img src="https://i.stack.imgur.com/5QRMO.png" alt="enter image description here"></p>
<p>Full disclosure: I work for Plotly. </p>
| <p>After a closer look at the resources at hand, I realized that the problem had already been solved for me by the folks over at <a href="http://arduino.cc" rel="nofollow noreferrer">http://arduino.cc</a></p>
<blockquote>
<p><a href="http://arduino.cc/en/Tutorial/Graph" rel="nofollow noreferrer">http://arduino.cc/en/Tutorial/Graph</a></p>
</blockquote>
<p>Oh how much time I could have saved if I had seen that earlier.</p>
| 31,973 |
<p>I am encountering an error like</p>
<pre><code>test(10)[1].csv file cannot be found at
C:\Documents and Settings\Ron\Local Settings\Temporary Internet Files\Content.IE5\PQ0STUVW
</code></pre>
<p>This happens when trying to export a CSV file using the following code.
Does anyone have any idea what could be wrong? This issue does not occur in IE7 / Firefox and is specific to IE6. </p>
<pre><code>response.setContentType("application/vnd.ms-excel");
response.setHeader("Content-Disposition", "attachment;filename=" + fileFullName);
</code></pre>
| <p>I've already got that problem and asked the <a href="https://stackoverflow.com/questions/120497/how-to-work-around-the-1-ie-bug-while-saving-an-excel-file-from-a-web-server">question that as a correct answer</a> (or, shall I say, a good workaround for IE).</p>
| <p>Are there "special" characters in your "fileFullName"? You could also try to check the length of your file name; I once had an issue with IE6 when the file name got too long.</p>
<p>Just to be on the safe side: your "fileFullName" only contains the name of the file and not the path, right? </p>
| 23,028 |
<p>I'm trying to build my first generic list and have run into some problems. I understand the declaration looks like " <code>List<T></code> ", and I have <code>using System.Collections.Generic;</code> at the top of my page. However, Visual Studio doesn't recognize the <code>T</code> variable. </p>
<p>What am I missing?</p>
| <p><code>List<T></code> means <code>List<WhateverTypeYouWantItToBeAListOf></code>. So for example:</p>
<p>If I have an Employee Class, and I wanted a collection of Employees, I could say:</p>
<pre><code>List<Employee> employeeList = new List<Employee>();
</code></pre>
<p>I could then add <code>Employee</code> Objects to that list, and have it be Type-safe and extend to however many employee objects I put in it.</p>
<p>Like so:</p>
<pre><code>Employee emp1 = new Employee();
Employee emp2 = new Employee();
employeeList.Add(emp1);
employeeList.Add(emp2);
</code></pre>
<p><code>employeeList</code> now holds <code>emp1</code> and <code>emp2</code> as objects.</p>
<p>There are several facets to generic collections, the most important being they provide an object independent way of having a... well... collection of objects. They are type-safe; which means that any collection will consist of one type of object. You won't have a <code>Animal</code> instance inside of <code>List<Employee></code> (unless, of course, Employee is a base class that <code>Animal</code> inherits from. At that point, however, you have bigger problems.</p>
<p>Programming with Generics is its own topic, worthy of (at least) one chapter in a book. At a very high level, programming with generics provides another way to reuse code -- independent of any class hierarchy or implementation details of a specific class.</p>
<p>More information <a href="http://missingmanuals.com/pub/a/dotnet/2005/06/20/generics.html" rel="noreferrer">here</a>.</p>
| <p>Are you trying to use the List class or are you trying to build your own? What does your code look like at the moment?</p>
| 47,156 |
<p>In my MFC program I am using a splitter to create two panes. I now want to split one of these panes in half again and put in another view, can someone talk me through how to do it or point me in the direction of some code?</p>
<p>I would prefer to code it myself so I am not interested in custom derived classes unless they are extremely basic.</p>
<p>Thanks!</p>
| <p>In <code>CMainFrame::OnCreateClient</code></p>
<pre><code>// Create splitter with 2 rows and 1 col
m_wndSplitter.CreateStatic(this, 2, 1);
// Create a view in the top row
m_wndSplitter.CreateView(0, 0, RUNTIME_CLASS(CView1), CSize(100, 100), pContext);
// Create a 2 column splitter that will go in the bottom row of the first
m_wndSplitter2.CreateStatic(&m_wndSplitter, 1, 2, WS_CHILD|WS_VISIBLE, m_wndSplitter.IdFromRowCol(1, 0));
// Create views for the bottom splitter
m_wndSplitter2.CreateView(0, 0, RUNTIME_CLASS(CView2), CSize(100, 100), pContext);
m_wndSplitter2.CreateView(0, 1, RUNTIME_CLASS(CView3), CSize(100, 100), pContext);
...
</code></pre>
| <p>I am not an expert in MFC, but can't you just put a splitter in one of the panes you made with the first splitter? That's how we do it in WinForms....</p>
| 25,047 |
<p>I am having a lot of trouble with mod_rewrite, and for a while I had it going, but for some reason everything has stopped working.</p>
<p>Is there any sort of basic test that I can do to make sure that it is not something fundamental broken.</p>
<p>It does show up in phpinfo();</p>
| <p>A secret that nobody mentions is that mod_rewrite is confusing partly because it's <em>buggy</em>.</p>
<p>Once you're sure you understand it, it does something strange and you get depressed and vow never to touch it again. Earlier this year I found a bug which was <a href="http://archive.apache.org/gnats/7879" rel="nofollow noreferrer">described in 2001</a>. That's right, <em>2001</em>. There's a <a href="https://issues.apache.org/bugzilla/show_bug.cgi?id=38642" rel="nofollow noreferrer">bugzilla entry for it</a> dating 2006. And a couple of duplicates. The bug is easy to reproduce, yet it still hasn't been fixed.</p>
<p>There's even a patch for it but it hasn't been merged into the code.</p>
<p>Of course, mod_rewrite being mod_rewrite, there's a good chance that there is a logical, simple explanation to what's happening. Code and examples might be helpful.</p>
| <p>Use an .htaccess file to create some rules. If they don't work then something is broken :)</p>
| 30,930 |
<p><em>Disclaimer: This is not actually a programming question, but I feel the audience on stackoverflow is more likely to have an answer than most question/answer sites out there.</em></p>
<p>Please forgive me, Joel, for stealing your question. Joel asked this question on a podcast a while back but I don't think it ever got resolved. I'm in the same situation, so I'm also looking for the answer. </p>
<p>I have multiple devices that all sync with MS-Outlook. PCs, Laptops, Smartphones, PDAs, etc. all have the capability to synchronize their data (calendars, emails, contacts, etc.) with the Exchange server. I like to use the Outlook meeting notice or appointment reminders to remind me of an upcoming meeting or doctors appointment or whatever. The problem lies in the fact that all the devices pop up the same reminder and I have to go to every single device individually in order to snooze or dismiss all of the identical the reminder popups. </p>
<p>Since this is a sync'ing technology, why doesn't the fact that I snooze or dismiss on one device sync up the other devices automatically. I've even tried to force a sync after dismissing a reminder and it still shows up on my other devices after a forced sync. This is utterly annoying to me. </p>
<p>Is there a setting that I'm overlooking or is there a 3rd party reminder utility that I should be using instead of the built-in stuff?</p>
<p>Thanks,
Kurt</p>
| <p>At least for PCs, the fact that you dismiss an item does get sync'd, and fairly quickly for me. I'm not sure why phones don't seem to do it, though. Maybe the ActiveSync protocol doesn't offer that option.</p>
| <p>Thanks from me, too :)</p>
<p>Maybe it's because all your devices clocks are synchronized to a time server, so they all have the exactly correct atomic-clock time, and all the devices notify you within a couple of seconds of each other, so the "dismiss" synchronization just doesn't happen fast enough.</p>
| 3,443 |
<p>I had a new extruder tip on my Ender 3 3D printer. The tip looked like the left tip in the image below. After I have been using it for about 5 months, the tip got dull/flat, like the tip on the right in the image below.</p>
<p>The only filament I have used is a spool of PLA (from hatchbox) and a spool of PETG (from sain-smart)</p>
<h2>About the Filament</h2>
<p>From the time that I replaced the tip, to now, i have only used my 1 spool of <a href="https://rads.stackoverflow.com/amzn/click/com/B00J0ECR5I" rel="nofollow noreferrer" rel="nofollow noreferrer">PLA filament</a>.</p>
<p>I don't believe it has any carbon-fiber in it, the only other things I can think of, are that the filament has a tough time sticking to the bed, so I have to print pretty close to the bed.</p>
<p><strong>Image of my 3D prints using my PLA filament</strong>
<a href="https://i.stack.imgur.com/ntcad.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ntcad.jpg" alt="enter image description here" /></a></p>
<p><strong>Image of my PLA filament</strong>
<a href="https://i.stack.imgur.com/O3PGS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O3PGS.jpg" alt="enter image description here" /></a></p>
<hr />
<p>I don't 3D print a terribly large amount. Is it normal to have to replace the tip this often?</p>
<p>How do I prevent my extruder tip from getting dull so soon? Is there a way to prevent the tip from getting dull at all?</p>
<p><strong>Actual Images:</strong> (Sorry for all the edits, I’m trying to add the images on my phone and it’s not working)
<img src="https://i.stack.imgur.com/lP0Vu.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/pboTZ.jpg" alt="enter image description here" /></p>
| <p>Playing around with the nozzle height will help: back it off until just before you have first layer adhesion issues. Don't jam the filament into the bed as you might for ABS. This helps with small prints. However, my experience has been that if you have a large enough continuous contact area (i.e. more than a few square inches) with the print bed, there will be problems getting the print off. So I still use painters tape (in case I have to rip the print off with force) and glue sticks (so that I don't often need to) on my aluminum print bed as I've found that makes it much easier to deal with without damaging either the bed or the print.</p>
<p>You can also try dialing back the heated bed temperature a bit (I think I've got mine set to 70-75c for PETG) but that also doesn't eliminate the issue with larger prints. Also, if I lowered it too much I had problems with first layer adhesion on any size print.</p>
<p>I also have a glass plate that I use for ABS, which I don't use with PETG. I've read too many accounts of it sticking too well to glass as well (to the point of the plate being destroyed) and didn't want to try using the amount of force on it that I sometimes have to when removing a PETG print. I also considered trying BuildTak but read accounts of similar issues with it and PETG. So I stayed with what's been working for me: tape and glue sticks.</p>
| <p>Correctly level your bed. Seriously, that's the answer. PETG does stick well, but it only gets difficult to remove if you're smashing the first layer against the bed with a nozzle that's way too close. With the bed leveled properly - using feeler gauges or test prints and a sub-0.1-mm-precision caliper - I have no trouble taking PETG prints off a buildtak-clone bed. Glass should be easier.</p>
<p>If you already have PETG stuck to a build surface you care about and don't want to risk destroying it, try heat, or alternating heat and cold.</p>
| 1,778 |
<p>I have a workspace for running an H.263 video encoder in a loop 31 times, i.e. the main is executed 31 times to generate 31 different encoded bit streams. This MS Visual Studio 2005 workspace has all C source files. When I create a "DEBUG" configuration for the workspace and build and execute it, it runs fine, i.e. it generates all 31 output files as expected.</p>
<p>But when I set the configuration of the workspace to "RELEASE" mode and repeat the process, the encoder crashes on some test case run.</p>
<p>Now, to debug this, I verified the following:</p>
<ol>
<li>Analyzed the code to see if there was any variable initialization being missed out in every run of the encoder </li>
<li>Checked the various Workspace(Solution) options in both the modes (DEBUG and RELEASE). </li>
</ol>
<p>There are some obvious differences, but I explicitly set the optimization-related options to be the same in both modes. </p>
<p>But I still could not nail down the problem and find a fix. Any pointers?</p>
<p>-Ajit.</p>
| <p>It's hard to say what the problem might be without carefully inspecting the code. However...</p>
<p>One of the differences between debug and release builds is how the function call stack frame is set up. There are certain classes of bad things you can do (like calling a function with the wrong number of arguments) that are not fatal in a debug build but crash horribly in a release build. Perhaps you could try changing the stack frame related options (I forget what they're called, sorry) in the release build to the same as the debug build and see whether that helps.</p>
<p>Another thing might be to enable all the warnings you possibly can, and fix them all.</p>
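<p>As a purely illustrative sketch (not taken from your encoder) of the kind of code that behaves this way: an uninitialized local's value depends entirely on what happens to be on the stack, so the two configurations - and even different runs - can behave completely differently:</p>
<pre><code>#include <cstdio>

void process(int frame)
{
    int offset;                  // BUG: used without always being initialized
    if (frame == 0)
        offset = 0;              // only assigned on the first call

    int buffer[32] = { 0 };
    buffer[offset] = frame;      // garbage index -> crash on some runs only
    std::printf("%d\n", buffer[offset]);
}

int main()
{
    for (int frame = 0; frame < 31; ++frame)
        process(frame);
    return 0;
}
</code></pre>
<p>Building at the highest warning level will usually flag this kind of mistake at compile time.</p>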
| <p>Are you sure there are no precompile directives that, say, ignores some really important code in Release mode but allows them in Debug?</p>
<p>Also, have you implemented any logging that might point out to the precise assembly that's throwing the error?</p>
| 3,005 |
<p>Using Lucene, one can retrieve the terms contained within in an index, i.e. the unique, stemmed words, excluding stop-words, that documents in the index contain. This is useful for generating autocomplete suggestions amongst other things. Is something similar possible with MS SQL Server full text indices?</p>
| <p>You can use the new system view in SQL Server 2008 to get you the terms and count of occurrences, is this what you want?</p>
<pre><code>sys.dm_fts_index_keywords_by_document
(
DB_ID('database_name'),
OBJECT_ID('table_name')
)
</code></pre>
<p>You need to supply the <code>db_id</code> and <code>object_id</code> of the fulltext table. This is the MSDN link for this: <a href="http://msdn.microsoft.com/en-us/library/cc280607.aspx" rel="nofollow noreferrer">sys.dm_fts_index_keywords_by_document</a>.</p>
| <p><code>sys.dm_fts_index_keywords</code> returns the list of indexed terms, with additional statistics, not the list of noise words, which is returned by <code>sys.fulltext_stopwords</code>.</p>
| 36,521 |
<p>Can I setup a custom MIME type through ASP.NET or some .NET code? I need to register the Silverlight XAML and XAP MIME types in IIS 6.</p>
| <p>To add to the master mime type list:</p>
<pre><code>using (DirectoryEntry mimeMap = new DirectoryEntry("IIS://Localhost/MimeMap"))
{
PropertyValueCollection propValues = mimeMap.Properties["MimeMap"];
IISOle.MimeMapClass newMimeType = new IISOle.MimeMapClass();
newMimeType.Extension = extension; // string - .xap
newMimeType.MimeType = mimeType; // string - application/x-silverlight-app
propValues.Add(newMimeType);
mimeMap.CommitChanges();
}
</code></pre>
<p>Add a reference to :</p>
<p>'System.DirectoryServices' on the .NET add references tab<br>
'Active DS IIS Namespace Provider' on the COM add references tab.</p>
<p>To configure a mime type for a specific site, change ..</p>
<p><code>'IIS://Localhost/MimeMap'</code> </p>
<p>to </p>
<p><code>'IIS://Localhost/W3SVC/[iisnumber]/root'</code> </p>
<p>...replacing <code>'[iisnumber]'</code> with the IISNumber of the website.</p>
| <p>'Active DS IIS Namespace Provider' on the COM add references tab.</p>
<p>If it's not there, you have to install IIS on your machine.</p>
<p>See <a href="https://stackoverflow.com/questions/1191811/is-there-a-way-to-get-all-the-mime-types-instead-of-wrinting-a-huge-case-statemen/1767242#1767242">Is there a way to get ALL the MIME types instead of wrinting a huge case statement?</a></p>
| 29,030 |
<p>I'd like to make sure that a thread is moved to a specific CPU core and can never be moved from it by the scheduler.</p>
<p>There's a <code>SetThreadAffinityMask()</code> call but there's no <code>GetThreadAffinityMask()</code>.</p>
<p>The reason I need this is because high resolution timers will get messed up if the scheduler moves that thread to another CPU.</p>
| <p>You should probably just use SetThreadAffinityMask and trust that it is working.</p>
<p><a href="http://msdn.microsoft.com/en-us/library/ms684251.aspx" rel="noreferrer">MSDN</a></p>
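<p>A minimal sketch: SetThreadAffinityMask returns the thread's <em>previous</em> affinity mask, so the return value effectively doubles as the missing "get":</p>
<pre><code>#include <windows.h>

// Pin the calling thread to CPU 0 (bit 0 of the mask).
DWORD_PTR previousMask = SetThreadAffinityMask(GetCurrentThread(), 1);
if (previousMask == 0)
{
    // The call failed; GetLastError() gives the reason.
}
// previousMask now holds the affinity the thread had before the call.
</code></pre>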
| <p>There is no need for Get<i>Thread</i>AffinityMask. Just get the value of Get<i>Process</i>AffinityMask, turn some bits off, then call SetThreadAffinityMask. The threads inherit the process' affinity mask, and since their affinity is under your control, you already know a thread's affinity mask (it's the one you set it to).</p>
| 19,827 |
<p>I need a distinct sound to play when a error occurs. The error is the result of a problem with one of perhaps two hundred barcodes that are being inputted in rapid fire. The event queue seems to handle keyboard input (which the barcode scanner emulates) first, and playing of my sound second. So if the barcodes are scanned quickly, the error sound stays in the queue, being bumped by the next scan.</p>
<p>Can I manipulate the priority of the queue?</p>
| <p>Javascript is not multi-threaded, so option 2 won't work. And I suspect that the event queue you speak of is the OS's event queue, which isn't likely to be accessible from a browser, if at all.</p>
<p>Beyond that, I'm having a little trouble understanding your problem. What is firing the error sound? Is it a <code>keyup</code>, etc. event, or is it a server-side validation that returns an error code or something else?</p>
| <p>What if you try to delay the scans (using <a href="http://www.sean.co.uk/a/webdesign/javascriptdelay.shtm" rel="nofollow noreferrer"><code>setTimeout()</code></a>), allowing the sounds to start and finish between them?</p>
| 35,684 |
<p>This is my log output:</p>
<pre><code>INFO main digestemails - process inbox
INFO main digestemails - checking for emails in c:\development\DCMail\email\KN-Source
INFO main digestemails - digesting [email protected]
INFO main digestemails - extracting attachments
INFO main digestemails - no attachments or no attachments supported
INFO main digestemails - updating database
INFO main digestemails - email -> COMPLETED folder
INFO main digestemails -
</code></pre>
<p>I would like a time stamp per log message ie</p>
<pre><code>INFO 2008-12-25 13:14:00 digestemails - email -> COMPLETED folder
</code></pre>
<p>here is my log4j config file</p>
<pre><code>log4j.rootLogger=debug, stdout, R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=c:\\development\\DCMail\\logs\\digestlogfolder\\digest-logfile.log
log4j.appender.R.MaxFileSize=100KB
# Keep one backup file
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
</code></pre>
<p>How do I do it?</p>
| <p>Use <code>%d</code> in your PatternLayout.</p>
<p>Also <code>%d</code> can take a format pattern as in <code>%d{dd MMM yyyy HH:mm:ss,SSS}</code> you can pick and choose the elements that you want. When the format pattern is omitted the date will be in ISO8601 format.</p>
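<p>Applied to the rolling file appender from your configuration, that could look like this:</p>
<pre><code>log4j.appender.R.layout.ConversionPattern=%p %d{yyyy-MM-dd HH:mm:ss} %t %c - %m%n
</code></pre>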
| <p>You can find more conversion characters usage in log4j javadoc.For example, at <a href="http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html" rel="nofollow noreferrer">http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html</a>.</p>
| 46,714 |
<p>What is the best method to scrape a dynamic website where most of the content is generated by what appears to be ajax requests? I have previous experience with a Mechanize, BeautifulSoup, and python combo, but I am up for something new.</p>
<p>--Edit--
For more detail: I'm trying to scrape the CNN <a href="http://www.cnn.com/ELECTION/2008/primaries/results/state/" rel="noreferrer" title="primary database">primary database</a>. There is a wealth of information there, but there doesn't appear to be an api.</p>
| <p>This is a difficult problem because you either have to reverse engineer the javascript on a per-site basis, or implement a javascript engine and run the scripts (which has its own difficulties and pitfalls).</p>
<p>It's a heavy weight solution, but I've seen people doing this with greasemonkey scripts - allow Firefox to render everything and run the javascript, and then scrape the elements. You can even initiate user actions on the page if needed.</p>
<p>-Adam</p>
| <p>This seems like a pretty common problem. I wonder why someone hasn't developed a programmatic browser. I'm envisioning a Firefox you can call from the command line with a URL as an argument; it would load the page, run all of the initial page-load JS events and save the resulting file.</p>
<p>I mean, Firefox and other browsers already do this; why can't we simply strip off the UI stuff? </p>
| 25,349 |
<p>Are there any noted differences in appearance rendering of HTML and XHTML in Google Chrome from Firefox? From IE? From other browsers? What browser does it render the code the most similar to?</p>
| <p>Since it's based on WebKit, its rendering will most closely resemble Safari and Konqueror.</p>
| <p>There are <a href="http://www.flickr.com/photos/kurafire/2822606444/" rel="nofollow noreferrer">anti-aliasing differences</a> between Safari 3.1 and Google Chrome, for whatever that's worth. This will doubtless be because Safari on Windows uses its own text-rendering and anti-aliasing layer instead of Windows's GDI.</p>
| 9,528 |
<p>I have an SSIS Package that sets some variable data from a SQL Server Package Configuration Table. (Selecting the "Specify configuration settings directly" option)</p>
<p>This works well when I'm using the Database connection that I specified when developing the package. However when I run it on a server (64 bit) in the testing environment (either as an Agent job or running the package directly) and I Specify the new connection string in the Connection managers, the package still reads the settings from the DB server that I specified in development.</p>
<p>All the other Connections take up the correct connection strings, it only seems to be the Package Configuration that reads from the wrong place.</p>
<p>Any ideas or am I doing something really wrong?</p>
| <p>The only way I was able to do this was to use Windows Environment Variables. You can specify things like connection strings and user preferences in environment variables, and then pick up those environment variables from your SSIS Task.</p>
| <p>We want to keep our package configs in a database table, we know it gets backuped with our other data and we know where to find it. Just a preference.</p>
<p>I have found that to get this to work I can use an environment variable configuration to set the connection string of the connection manager that I am reading my package config from. (Although I had to restart the SQL Server agent before it could find the new environment variable. Not ideal when I deploy this to Production)</p>
<p>Looks Like when you run an SSIS package as a step in a scheduled task it works in this order:</p>
<ul>
<li>Load each of the Package Configs in the order they appear in the Package Configuations Organiser</li>
<li>Set the Connection Strings from the Data sources tab in the Job Step properties of the Scheduled Job</li>
<li>Start running package.</li>
</ul>
<p>I would have expected the first 2 to be the other way around so that I can set the data source for my package config from the scheduled job. That is where I would expect other people to look for it when maintaining the package.</p>
| 5,972 |
<p>Since version 1.5, Subversion supports having a local caching proxy for the main master repository. </p>
<p>I got the slave synced and the master replaying the commits to the slave.
Everything works fine so far, but now I am wondering how to do the authentication (working with <a href="http://blogs.open.collab.net/svn/2007/10/yesterday-at-th.html" rel="noreferrer">this</a> guide).</p>
<p>When both, the master and the slave, have authentication set, the slave asks for username/password on reads, but both ask on writes.</p>
<p>What is the way to also make authentication transparent to the user of the slave (meaning only one authentication is required, regardless of whether it is a read or a write)?</p>
<p>I am testing with:</p>
<ul>
<li>Apache/2.2.3, Subversion 1.4.2 on the slave (Debian)</li>
<li>Apache/2.2.8, Subversion 1.5.1 (Ubuntu)</li>
</ul>
| <p>In the end the problem was solved by configuring mod_proxy correctly.
Once mod_proxy is aware that it also has to proxy the authentication credentials, it works fine and the user has to enter the username/password only once.</p>
| <p>Remembering the password must surely be up to the svn client you're using, why would it ask you again if you told it to remember it?</p>
<p>Also you might want to read up on apache, specifically the Require directive, which controls HTTP authentication: <a href="http://httpd.apache.org/docs/2.2/mod/core.html#require" rel="nofollow noreferrer">http://httpd.apache.org/docs/2.2/mod/core.html#require</a></p>
<p>Usually <code>Require valid-user</code> is used</p>
| 32,227 |
<p>I am attempting to use the .Net System.Security.SslStream class to process the server side of a SSL/TLS stream with client authentication.</p>
<p>To perform the handshake, I am using this code:</p>
<pre><code>SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback);
sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false);
</code></pre>
<p>Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subjectnames of all certificates in my CryptoAPI Trusted Root Store.</p>
<p>I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store.</p>
<p>It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too.</p>
<p>Any ideas?</p>
| <p>It does not look like this is currently possible using the .NET libraries. </p>
<p>I solved it by using the Mono class library implementation of System.Security.SslStream, which gives better access to overriding the servers behavior during the handshake.</p>
| <p>It is not the validation part I want to change. The problem is in the initial handshake, the server transmits the message informing the client that client authentication is required (that is the CertificateRequest message). As part of this message, the server sends the names of CAs that it will accept as issuers of the client certificate. It is that list which per default contains all the Trusted Roots in the store.</p>
<p>But if is possible to override the certificate root store for a single application, that would probably fix the problem. Is that what you mean? And if so, how do I do that?</p>
| 7,741 |
<p>I'm pretty new to the Spring Framework, I've been playing around with it and putting a few samples apps together for the purposes of evaluating Spring MVC for use in an upcoming company project. So far I really like what I see in Spring MVC, seems very easy to use and encourages you to write classes that are very unit test-friendly.</p>
<p>Just as an exercise, I'm writing a main method for one of my sample/test projects. One thing I'm unclear about is the exact differences between <code>BeanFactory</code> and <code>ApplicationContext</code> - which is appropriate to use in which conditions?</p>
<p>I understand that <code>ApplicationContext</code> extends <code>BeanFactory</code>, but if I'm just writing a simple main method, do I need the extra functionality that <code>ApplicationContext</code> provides? And just exactly what kind of extra functionality does <code>ApplicationContext</code> provide?</p>
<p>In addition to answering "which should I use in a main() method", are there any standards or guidelines as far as which implementation I should use in such a scenario? Should my main() method be written to depend on the bean/application configuration to be in XML format - is that a safe assumption, or am I locking the user into something specific?</p>
<p>And does this answer change in a web environment - if any of my classes needed to be aware of Spring, are they more likely to need <code>ApplicationContext</code>?</p>
<p>Thanks for any help. I know a lot of these questions are probably answered in the reference manual, but I'm having a hard time finding a clear breakdown of these two interfaces and the pros/cons of each without reading thru the manual with a fine-tooth comb.</p>
| <p>The spring docs are great on this: <a href="http://static.springframework.org/spring/docs/2.5.x/reference/beans.html#context-introduction-ctx-vs-beanfactory" rel="noreferrer">3.8.1. BeanFactory or ApplicationContext?</a>.
They have a table with a comparison, I'll post a snippet:</p>
<p><strong>Bean Factory</strong></p>
<ul>
<li>Bean instantiation/wiring</li>
</ul>
<p><strong>Application Context</strong></p>
<ul>
<li>Bean instantiation/wiring</li>
<li>Automatic BeanPostProcessor registration</li>
<li>Automatic BeanFactoryPostProcessor registration</li>
<li>Convenient MessageSource access (for i18n)</li>
<li>ApplicationEvent publication</li>
</ul>
<p>So if you need any of the points presented on the Application Context side, you should use ApplicationContext.</p>
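<p>For a simple main method, a sketch could look like this (assuming a <code>beans.xml</code> on the classpath and a bean named <code>myService</code>; both names are just placeholders):</p>
<pre><code>import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // Bootstraps the container from an XML bean definition on the classpath.
        ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
        MyService service = (MyService) context.getBean("myService");
        service.doWork();
    }
}
</code></pre>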
| <p>I think it is worth mentioning that since Spring 3, if you want to create a factory, you can also use the <a href="https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/Configuration.html" rel="nofollow noreferrer"><code>@Configuration</code></a> annotation combined with the proper <a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/Scope.html" rel="nofollow noreferrer"><code>@Scope</code></a>.</p>
<pre><code>@Configuration
public class MyFactory {
@Bean
@Scope("prototype")
public MyClass create() {
return new MyClass();
}
}
</code></pre>
<p>Your factory should be visible to the Spring container, either via the <a href="https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/ComponentScan.html" rel="nofollow noreferrer"><code>@ComponentScan</code></a> annotation or via XML configuration.</p>
<p><a href="http://www.baeldung.com/spring-bean-scopes" rel="nofollow noreferrer">Spring bean scopes article from baeldung site</a></p>
| 30,155 |
<p>I'm getting something pretty strange going on when trying to read some data using the MySql .net connector. Here's the code:</p>
<pre><code>IDataReader reader = null;
using (MySqlConnection connection = new MySqlConnection(this.ConnectionString))
{
String getSearch = "select * from organization";
MySqlCommand cmd = new MySqlCommand(getSearch, connection);
cmd.CommandType = CommandType.Text;
connection.Open();
reader = cmd.ExecuteReader();
while (reader.Read())
{
// response write some stuff to the screen (snipped for brevity)
}
}
</code></pre>
<p>If I put a breakpoint after the ExecuteReader and expand the results view in Visual Studio (hovering over reader and expanding), I can see the rows returned by the query. If I then let that close and expand the results view again, I get the message 'Enumeration yielded no results'.</p>
<p>It seems as if the contents of the reader are getting reset as soon as they're viewed.</p>
<p>As for what we've tried:<br>
- the SQL runs fine directly on to DB<br>
- Binding the results of the query directly to a datagrid just returns an empty datagrid<br>
- got the latest version of the .net connector<br>
- tried on two different machines to rule out any local errors</p>
<p>So far nothing's worked.</p>
<p>If anyone could offer any ideas or suggestions they would be very much appreciated.</p>
| <p>from what I understand the SqlDataReader is intended to be used for a one-time enumeration of the data you've returned. Once you've cycled through the results once, the object has done its duty. Here are a couple ideas for working around this, one or the other of which may solve this for you depending on your needs:</p>
<ol>
<li><p>Re-execute the query to generate another SqlDataReader when needed</p></li>
<li><p>Instead of using the SqlDataReader, store the results of your original query into a System.Data.DataTable, where you can then re-read and manipulate the data however you like (see the sketch below).</p></li>
</ol>
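<p>A minimal sketch of option 2, reusing the connection string and query from the question (requires the System.Data and MySql.Data.MySqlClient namespaces):</p>
<pre><code>DataTable table = new DataTable();
using (MySqlConnection connection = new MySqlConnection(this.ConnectionString))
using (MySqlCommand cmd = new MySqlCommand("select * from organization", connection))
{
    connection.Open();
    using (IDataReader reader = cmd.ExecuteReader())
    {
        // Copies every row out of the forward-only reader into an
        // in-memory table that can be re-read, inspected or bound to a grid.
        table.Load(reader);
    }
}
// e.g. myDataGrid.DataSource = table; myDataGrid.DataBind();
</code></pre>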
<p>Hope this helps!</p>
<p>Adam</p>
| <p>Since a datareader reads in information, your using block closes the connection to the reader just after assigning its value to the variable. <a href="http://www.simple-talk.com/dotnet/.net-framework/should-you-use-ado.net-datareader-or-dataset/" rel="nofollow noreferrer">Here is an article</a> that shows you some examples of code that might get you to where you need to be.</p>
<p>The key is that the connection MUST be open, when trying to read from the reader.</p>
| 32,426 |
<p>I'm building my first Flex app and am currently busy splitting it up into multiple components to make it maintainable.
I have a screen which holds a list that is displayed and filled after a successful login attempt:</p>
<p>Part of the main app:</p>
<pre><code><mx:ViewStack id="vsAdmin" height="100%" width="100%">
<mx:TabNavigator id="adminTabs" width="100%" height="100%" historyManagementEnabled="false">
<myComp:compBeheerdersAdmin id="beheerdersViewstackA"/>
</mx:TabNavigator>
</mx:ViewStack>
</code></pre>
<p>In the component compBeheerdersAdmin there is a function requestBeheerdersList() that gets the data from the server and Binds it to the list through a handler.</p>
<p>After login the following code from the main app:</p>
<pre><code>mainViewstack.selectedChild = vsAdmin;
//beheerdersViewstackA.createComponentsFromDescriptors();
beheerdersViewstackA.requestBeheerdersList();
</code></pre>
<p>The function requestBeheerdersList() does nothing (it is not reached; I put an alert as the first statement in the function but it is not displayed) when I log in after a fresh load of the swf. But when I log out and log in again, the function is reached, the alert is displayed and the list is filled with the data from the server.
Any ideas?</p>
| <p>I would make sure the component exists that you are calling before calling the next function. This could be done by forcing creationPolicy=all as you figured out. You could also add an event listener for the CreationComplete to call the function you want:</p>
<pre><code>private function doThisFirst():void{
mainViewstack.selectedChild = vsAdmin;
vsAdmin.addEventListener(FlexEvent.CREATION_COMPLETE,doThis);
}
private function doThis(event:FlexEvent):void{
beheerdersViewstackA.requestBeheerdersList();
}
</code></pre>
<p>This may not be exactly correct but I tried to recreate to your specific example. If you are familiar with viewstack creation of its children and eventlisteners you should be able to fit this to your specific need.</p>
| <p>Alternatively, you can have creationComplete defined in your MXML:</p>
<pre><code><mx:Canvas ... creationComplete="onCreationComplete()">
<mx:Script>
<![CDATA[
private function onCreationComplete():void {
requestBeheerdersList()
}
]]>
</mx:Script>
</code></pre>
<p>or possibly </p>
<pre><code><mx:Canvas ... creationComplete="requestBeheerdersList()">
</code></pre>
<p>The difficulty with Flex is understanding how an MXML component maps to the equivalent pure ActionScript class. When you have something like <local:Mycomponent id="myComponent"> in your MXML code, this adds an instance of a class as a child. The MXML file, Mycomponent.mxml, defines the class. Unless declared as static, the functions listed within the <mx:Script> tag are functions that apply to the instance. There is no constructor that you can explicitly define, and the instance is not available before actual creation. You therefore have to rely on creationComplete to execute any function that you would have called from a constructor in a strictly AS3 class. </p>
| 24,074 |
<p>I want to add a constant value onto an incoming bound integer. In fact I have several places where I want to bind to the same source value but add different constants. So the ideal solution would be something like this...</p>
<pre><code><TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=5}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=8}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=24}"/>
</code></pre>
<p>(NOTE: This is an example to show the idea, my actual binding scenario is not to the canvas property of a TextBox. But this shows the idea more clearly) </p>
<p>At the moment the only solution I can think of is to expose many different source properties each of which adds on a different constant to the same internal value. So I could do something like this...</p>
<pre><code><TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus5}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus8}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus24}"/>
</code></pre>
<p>But this is pretty grim because in the future I might need to keep adding new properties for new constants. Also if I need to change the value added I need to go an alter the source object which is pretty naff. </p>
<p>There must be a more generic way than this? Any WPF experts got any ideas?</p>
| <p>I use a <code>MathConverter</code>that I created to do all simple arithmatic operations with. The code for the converter is <a href="http://rachel53461.wordpress.com/2011/08/20/the-math-converter/" rel="noreferrer">here</a> and it can be used like this:</p>
<pre><code><TextBox Canvas.Top="{Binding SomeValue,
Converter={StaticResource MathConverter},
ConverterParameter=@VALUE+5}" />
</code></pre>
<p>You can even use it with more advanced arithmatic operations such as</p>
<pre><code>Width="{Binding ElementName=RootWindow, Path=ActualWidth,
Converter={StaticResource MathConverter},
ConverterParameter=((@VALUE-200)*.3)}"
</code></pre>
| <p>I've never used WPF, but I have a possible solution.</p>
<p>Can your binding Path map to a Map? If so, it should then be able to take an argument (the key). You'd need to create a class that implements the Map interface, but really just returns the base value that you initialized the "Map" with added to the key.</p>
<pre><code>public Integer get( Integer key ) { return baseInt + key; } // or some such
</code></pre>
<p>Without some ability to pass the number from the tag, I don't see how you're going to get it to return different deltas from the original value.</p>
| 15,260 |
<p>How can I call a BizTalk Orchestration dynamically knowing the Orchestration name? </p>
<p>The Call Orchestration shapes need to know the name and parameters of orchestrations at design time. I've tried using the 'call' XLang keyword, but it also requires the orchestration name at design time; for example, in an Expression shape we can write: </p>
<pre><code>call BizTalkApplication1.Orchestration1(param1,param2);
</code></pre>
<p>I'm looking for some way to specify calling orchestration name, coming from the incoming message or from SSO config store.</p>
<p>EDIT: I'm using BizTalk 2006 R1 (ESB Guidance is for R2, and I didn't understand how it could solve my problem). </p>
| <p>The way I've accomplished something similar in the past is by using direct binding ports in the orchestrations and letting the MsgBox do the dirty work for me. Basically, it goes something like this:</p>
<ol>
<li>Make the callable orchestrations use a direct-bound port attached to your activating receive shape.</li>
<li>Set up a filter expression on your activating receive shape with a custom context-based property and set it equal to a value that uniquely identifies the orchestration (such as the orchestration name or whatever)</li>
<li>In the calling orchestration, create the message you'll want to use to fire the new orchestration. In that message, set your custom context property to the value that matches the filter used in the specific orchestration you want to fire.</li>
<li>Send the message through a direct-bound send port so that it gets sent to the MsgBox directly and the Pub/Sub mechanisms in BizTalk will take care of the rest.</li>
</ol>
<p>One thing to watch out in step 4: To have this work correctly, you will need to create a new Correlation Set type that includes your custom context property, and then make sure that the direct-bound send port "follows" the correlation set on the send. Otherwise, the custom property will only be written (and not promoted) to the msg context and the routing will fail.</p>
<p>Hope this helps!</p>
| <p>Look at ESB Guidance (www.codeplex.com/esb). This package provides the functionality you are looking for.</p>
| 9,787 |
<p>How do you set up an ASP.NET SQL membership/role provider on a production machine? I'm trying to set up BlogEngine.NET, and all the documentation says to use the ASP.NET Website Administration tool from Visual Studio, but that isn't available on a production machine. Am I the first BlogEngine user to use it on a non-development box?</p>
<p>The SQL server is completely blocked off from everything but the production box, I do have SQL Management Studio on there though.</p>
<p>EDIT: I mean, how do you add new users/roles, not how do you create the tables. I've already ran aspnet_regsql to create the schema.</p>
<p>EDIT2: MyWSAT doesn't work because it requires an initial user in the database as well. I need an application that will allow me to create new users in the membership database without any authentication, just a connection string.</p>
| <p>I solved this problem by setting up a default super user at application start up.</p>
<p>By adding this to global.asax:</p>
<pre>
<code>
void Application_Start(object sender, EventArgs e)
{
// Code that runs on application startup
// check that the minimal security settings are created
Security.SetupSecurity();
}
</code>
</pre>
<p>Then in the security class:</p>
<pre><code>using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

/// <summary>
/// Creates minimum roles and user for application access.
/// </summary>
public class Security
{
    // application roles
    public static string[] applicationRoles =
        { "Roles1", "Roles2", "Roles3", "Roles4", "Roles5" };

    // super user
    private static string superUser = "super";
    // default password, should be changed on first connection
    private static string superUserPassword = "default";

    private Security()
    {
        //
        // TODO: Add constructor logic here
        //
    }

    /// <summary>
    /// Creates minimal membership environment.
    /// </summary>
    public static void SetupSecurity()
    {
        SetupRoles();
        SetupSuperuser();
    }

    /// <summary>
    /// Checks roles, creates missing.
    /// </summary>
    public static void SetupRoles()
    {
        // create roles
        for (int i = 0; i < applicationRoles.Length; i++)
        {
            if (!Roles.RoleExists(applicationRoles[i]))
                Roles.CreateRole(applicationRoles[i]);
        }
    }

    /// <summary>
    /// Checks if superuser account is created.
    /// Creates the account and assigns it to all roles.
    /// </summary>
    public static void SetupSuperuser()
    {
        // create super user
        MembershipUser user = Membership.GetUser(superUser);
        if (user == null)
            Membership.CreateUser(superUser, superUserPassword, "[email protected]");

        // assign superuser to roles
        for (int i = 0; i < applicationRoles.Length; i++)
        {
            if (!Roles.IsUserInRole(superUser, applicationRoles[i]))
                Roles.AddUserToRole(superUser, applicationRoles[i]);
        }
    }
}
</code></pre>
<p>Once you have a default user, you can use AspNetWSAT or other.</p>
| <p>You'll have to have .NET 2.0 installed on the machine; the VS tool is just a GUI wrapper for a command-line tool that is part of the framework.</p>
<p>Check C:\Windows\Microsoft.NET\Framework\v2.0.50727 for the app aspnet_regsql.exe</p>
<p>/? for command line switches, /W for a wizard mode</p>
| 18,943 |
<p>I have a few questions related:</p>
<p>1) Is it possible to make my program change the file-type association, but only while it is running? Do you see anything wrong with this behavior?</p>
<p>2) The other option I'm seeing is to let users decide to open with my application or restore the default association ... something like: "capture all .lala files" or "restore .lala association". How can I do this? What do you think is the best approach?</p>
| <p>Regarding file associations, I wrote an answer earlier that at least <a href="https://stackoverflow.com/questions/212906/script-to-associate-an-extension-to-a-program#212921">covers the "How"</a>.</p>
<p>This should also point you to the right direction how to handle backup and restore. With direct registry access through c#, there will be no need to use .reg files, so you are free to back up the previous value however you like in your app, and also restore it from there.</p>
<p>The key question here is: <em>Should</em> you change file associations randomly? At least asking the user up-front would obviously be necessary (as you also indicated). </p>
<p>Furthermore, Vista users with UAC enabled, or non-privileged users of other Windows versions may not have the required permission to change global file associations. The (un)installation procedure of your program may be the only place where this can succeed. </p>
<p>EDIT</p>
<p>As <a href="https://stackoverflow.com/questions/222561/filetype-association-with-application-c#222799">Franci Penov indicated in his answer</a>, there <em>is</em> a way to change local file associations on a per-user basis, even for non-admins (that's why I spoke of "global associations" in the previous paragraph). He also mentioned why going there is not overly advisable.</p>
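<p>For completeness, a per-user association can be written under HKEY_CURRENT_USER, which does not require elevation. This is only a sketch; the extension, ProgID and application path below are made up for illustration:</p>
<pre><code>using Microsoft.Win32;

string extension = ".lala";                              // hypothetical extension
string progId = "FooApp.lala";                           // hypothetical ProgID
string appPath = @"C:\Program Files\FooApp\FooApp.exe";  // hypothetical path

// Map the extension to the ProgID for the current user only.
using (RegistryKey extKey = Registry.CurrentUser.CreateSubKey(@"Software\Classes\" + extension))
{
    extKey.SetValue("", progId);
}

// Tell Windows how to open files of that ProgID.
using (RegistryKey cmdKey = Registry.CurrentUser.CreateSubKey(
    @"Software\Classes\" + progId + @"\shell\open\command"))
{
    cmdKey.SetValue("", "\"" + appPath + "\" \"%1\"");
}
</code></pre>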
| <p>1) You get to define the file types that appear in the Open dialog's file-type drop-down list.
Outside of that, it's <em>possible</em> to change the file-type default on application open and then restore it on application close, as file type associations are just registry settings.</p>
<p>As for wrong, I wouldn't. First reason is that it's not the standard behavior of applications. The second reason is that if your application or PC exits unexpectedly, you run the risk of not returning the association to it's original setting.</p>
<p>2) Windows by default allows user to choose these options utilizing the right-click and the "open with" command. </p>
| 27,460 |
<p>What would prevent one machine from consuming a Silverlight Enabled WCF service some of the time, whilst another on the same network domain, behind the same proxy / firewall / etc.. can fine? Service and app are on the same domain.</p>
<p>I'm writing to the event log when the service comes in, and I can't even see the call come through. </p>
<p>Any pointers greatly appreciated.</p>
<p><strong>Update:</strong> After using fiddler, it seems to work if we have fiddler running, but not if we don't. I'm more confused than ever!</p>
| <p>Install <a href="http://www.fiddlertool.com/" rel="nofollow noreferrer">Fiddler</a> and see what is going on. No other good way to figure this out (sometimes even fiddler can't help).</p>
| <p>Install <a href="http://www.fiddlertool.com/" rel="nofollow noreferrer">Fiddler</a> and see what is going on. No other good way to figure this out (sometimes even fiddler can't help).</p>
| 43,319 |
<p>I have a Linux image (Debian) running on VMware ESX 3.1.
Is it possible to copy that image and run it locally on my VMware Workstation?
How?</p>
| <p>I <em>believe</em> that while ESX (commercial) is reverse-compatible to Server (free), Server is <strong>NOT</strong> forward-compatible to ESX.</p>
<p>Therefore, you can import Server images to ESX, but not the other way around.</p>
<p>You <em>may</em> be able to go from an ESX host to an ESXi (free) host, however.</p>
| <p>Kasper, what version of ESX are you running? Trying to export an image from our ESX server always ends in an I/O error (I don't have access to the server right now, so I cannot remember the exact error message).</p>
<p>I would love to get this feature working, but I have not found anything on the web that might help with this specific error.</p>
| 41,880 |
<p>I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.</p>
<p>One idea was to have the C compiled to the users local ~/bin, with timestamp comparison with C code so it is not compiled each run, but only when code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.</p>
<p>Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?</p>
<p><em>Clarification: Multiple OS's are involved that do not share ABI. Eg. OS X, various Linuxes, BSD etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.</em></p>
| <p>Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.</p>
<p>In /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.</p>
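<p>A sketch of what that shared .rc file could contain (the binary names reuse the hypothetical ones above):</p>
<pre><code># Pick the right binary for the current platform.
case "$(uname -s)" in
    Darwin)  alias toolname=/shared/bin/toolname-mac ;;
    Linux)   alias toolname=/shared/bin/toolname-debian-x86 ;;
    NetBSD)  alias toolname=/shared/bin/toolname-netbsd-dreamcast ;;
esac
</code></pre>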
<p>This won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.</p>
<p>I wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.</p>
| <p>You know, you should look at static linking.</p>
<p>These days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore. </p>
<p>You could also try running your applications in chroot() jails and distributing those.</p>
| 6,099 |
<p>With a seriously big .NET site/solution (100's of assemblies), are there any tools available to recognise which assemblies have changed since the last build (using something more intelligent than file dates that will always change).</p>
<p>I need to change our deployment process to a) increment the version of changed assemblies and b) generate a delta release to include these modified assemblies. </p>
<p>My current approach for our ASP.NET web site and Biztalk servers is to re-deploy the full solution after a build - this can take up to 3 hours (most of the time is spent undeploying and redeploying the BizTalk applications).</p>
<p>Microsoft recommend that we version our assemblies and only deploy those assemblies that have changed to reduce our deployment window. I would like to automate this as much as possible.</p>
<p>We are currently using MSBuild, TFS and good "old fashioned" people to manage our build process.</p>
| <p>Have a look at <a href="http://www.ndepend.com/" rel="nofollow noreferrer">NDepend</a>. I believe you can use it to generate reporots of just that sort of thing and use these as inputs to your deployment process</p>
| <p>One of the great traceability features in Team Build is how it tracks changesets, as well as, work items associated to changesets from build to build. </p>
<p>We use these build summaries to help us generate our release notes between versions. We use a build definition what we run manually when we are ready to deploy a validated version, where the last time the build was run was the date we deployed the previous validated version. This roles up all changes between those two build dates and provides us a comprehensive report for generating the release notes.</p>
| 43,796 |
<p>I was wondering why <code>shared_ptr</code> doesn't have an implicit constructor. The fact it doesn't is alluded to here: <a href="https://stackoverflow.com/questions/142391/getting-a-boostsharedptr-for-this">Getting a boost::shared_ptr for this</a></p>
<p>(I figured out the reason but thought it would be a fun question to post anyway.)</p>
<pre><code>#include <boost/shared_ptr.hpp>
#include <iostream>
using namespace boost;
using namespace std;
void fun(shared_ptr<int> ptr) {
cout << *ptr << endl;
}
int main() {
int foo = 5;
fun(&foo);
return 0;
}
/* shared_ptr_test.cpp: In function `int main()':
* shared_ptr_test.cpp:13: conversion from `int*' to non-scalar type `
* boost::shared_ptr<int>' requested */
</code></pre>
| <p>In this case, the shared_ptr would attempt to free your stack allocated int. You wouldn't want that, so the explicit constructor is there to make you think about it.</p>
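<p>A minimal sketch of what the explicit constructor forces you to write instead, so that the pointer really is safe for the shared_ptr to delete:</p>
<pre><code>int main() {
    shared_ptr<int> foo(new int(5));  // heap allocation: fine for shared_ptr to delete
    fun(foo);
    return 0;
}
</code></pre>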
| <pre><code>int main() {
int foo = 5;
fun(&foo);
cout << foo << endl; // ops!!
return 0;
}
</code></pre>
| 39,071 |
<p>How does one read a data file in an iPhone project? For example, let's say I have a static file called "level.dat" that is structured as follows:</p>
<pre><code>obstacles: 10
time: 100
obstacle1: 10,20
...
</code></pre>
<p>I would like to read the contents of the file into a NSString then do the parsing. How do I read the contents of a file into a string? Also, where in the project should the "level.dat" file reside? Should it be under "Resources" or just in the main directory?</p>
<p>Thanks in advance!</p>
| <p>See this answer: <a href="https://stackoverflow.com/questions/318341/how-to-fopen-on-the-iphone">How to fopen() on the iPhone?</a> which shows how to get access to resources in your bundle. Once you have the path, just use [NSString stringWithContentsOfFile:encoding:error:].</p>
<pre><code>NSString *path = [[NSBundle mainBundle] pathForResource: @"level" ofType: @"dat"]
NSError *error = nil;
NSString *data = [NSString stringWithContentsOfFile: path
encoding: NSUTF8StringEncoding
error: &error];
</code></pre>
| <p>If you need help parsing the data string, there's a helpful article on <a href="http://www.macresearch.org/cocoa_for_scientists_part_xvii_representing_data_in_tables" rel="nofollow noreferrer">Cocoa For Scientist</a></p>
| 41,635 |
<p>I have seen a few (old) posts on the 'net about hacking together some support for pre-compiled headers in CMake. They all seem a bit all-over the place and everyone has their own way of doing it. What is the best way of doing it currently?</p>
| <p>There is a <a href="https://github.com/sakra/cotire" rel="noreferrer">third party CMake module named 'Cotire'</a> which automates the use of precompiled headers for CMake based build systems and also supports unity builds.</p>
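<p>Typical usage is only a couple of lines; a sketch, assuming cotire.cmake is on your CMAKE_MODULE_PATH and the target name is a placeholder:</p>
<pre><code># Make the cotire module available, then let it set up the precompiled header.
include(cotire)

add_executable(myapp main.cpp foo.cpp bar.cpp)
cotire(myapp)
</code></pre>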
| <p>Don't even go there. Precompiled headers mean that whenever one of the headers changes, you have to rebuild <em>everything</em>. You're lucky if you have a build system that realizes this. More often than not, your build will just fail until you realize that you changed something that is being precompiled, and therefore you need to do a full rebuild. You can avoid this mostly by precompiling the headers that you are absolutely positive won't change, but then you're giving up a large part of the speed gain as well.</p>
<p>The other problem is that your namespace gets polluted with all kinds of symbols that you don't know or care about in many places where you'd be using the precompiled headers.</p>
| 17,897 |
<p>Tomcat (version 5 here) stores session information in memory. When clustering, this information is periodically broadcast to other servers in the cluster to keep things in sync. You can use a database store to make sessions persistent, but this information is only written periodically as well and is only really used for failure recovery rather than actually replacing the in-memory sessions.</p>
<p>If you don't want to use sticky sessions (our configuration doesn't allow it unfortunately) this raises the problem of the sessions getting out of sync.</p>
<p>In other languages, web frameworks tend to allow you to use a database as the primary session store. Whilst this introduces a potential scaling issue it does make session management very straightforward. I'm wondering if there's a way to get tomcat to use a database for sessions in this way (technically this would also remove the need for any clustering configuration in the tomcat server.xml).</p>
| <p>There definitely is a way. Though I'd strongly vote for sticky sessions - saves so much load for your servers/database (unless something fails)...</p>
<p><a href="http://tomcat.apache.org/tomcat-5.5-doc/config/manager.html" rel="nofollow noreferrer">http://tomcat.apache.org/tomcat-5.5-doc/config/manager.html</a> has information about SessionManager configuration and setup for Tomcat. Depending on your exact requirements you might have to implement your own session manager, but this starting point should provide some help.</p>
| <p>Take a look at <a href="http://www.terracotta.org/" rel="nofollow noreferrer">Terracotta</a>, I think it can address your scaling issues without a major application redesign.</p>
| 10,802 |
<p>I am the owner of a pretty Anycubic Mega I3, and it was very cool to own it. </p>
<p>However, I now have several problems when printing with it. It clicks constantly, at high or low temperature, even 5 mm above the plate, and the result looks very bad. It is the same with the basic black PLA or with other PLA from ICE-Filaments, and I can't do anything about it. </p>
<p>I use Cura and I've reset it several times, with the default options or not.</p>
<p>Here are two examples of some prints (normal cube): </p>
<p><a href="https://cdn.discordapp.com/attachments/380420138920574986/455045425636835338/Snapchat-1209640225.jpg" rel="noreferrer">example 1</a> </p>
<p><a href="https://i.stack.imgur.com/rSqA2.jpg" rel="noreferrer" title="First example of poorly printed cube"><img src="https://i.stack.imgur.com/rSqA2.jpg" alt="First example of poorly printed cube" title="First example of poorly printed cube"></a></p>
<p>and <a href="https://cdn.discordapp.com/attachments/380420138920574986/455045425636835340/Snapchat-27434386.jpg" rel="noreferrer">Example 2</a></p>
<p><a href="https://i.stack.imgur.com/hb1dJ.jpg" rel="noreferrer" title="Second example of poorly printed cube"><img src="https://i.stack.imgur.com/hb1dJ.jpg" alt="Second example of poorly printed cube" title="Second example of poorly printed cube"></a></p>
| <p>I redid the print in order to reply to some questions posed in the answer of @kdtop. The print started, but the output was not consistent and sometimes stopped. The temperature is 195°C and sometimes drops to 194°C. First I pushed the new reel so that the extruder did not need to pull so hard. When this did not solve the problem, I changed the temperature to 200°C. Now the output became consistent and my print finished. It was not as good as the one that I did with my previous filament: the top was not as neatly closed, and only the last 2 layers covered more or less 100% of the surface (perhaps 200°C is too high for this?).</p>
<p>For me the solution is to raise the temperature to 200°C (or perhaps 205°C).</p>
| <p>My slicer (Cura-lulzbot) has a setting for initial printing temp, and then printing temp after the first few layers. Is it possible that your temp is initially OK, but then drops too low? Does your printer have a readout that shows the current temp? Is the temp still OK when it stops?</p>
<p>It sounds like you are printing a sample cube, so I assume it is not too large. Could you simulate this by directly commanding your printer to extrude 500 mm of filament, or longer? Then see if it clogs. That would tell you whether it is a physical problem with your printer rather than some change specified by the G-code for a sliced print. </p>
| 927 |
<p>It's common to want browsers to cache resources - JavaScript, CSS, images, etc. until there is a new version available, and then ensure that the browser fetches and caches the new version instead.</p>
<p>One solution is to embed a version number in the resource's filename, but will placing the resources to be managed in this way in a directory with a revision number in it do the same thing? Is the whole URL to the file used as a key in the browser's cache, or is it just the filename itself and some meta-data?</p>
<p>If my code changes from fetching <code>/r20/example.js</code> to <code>/r21/example.js</code>, can I be sure that revision 20 of <code>example.js</code> was cached, but now revision 21 has been fetched instead and it is now cached?</p>
| <p>Yes, any change in <em>any part</em> of the URL (excluding HTTP and HTTPS protocols changes) is interpreted as a different resource by the browser (and any intermediary proxies), and will thus result in a separate entity in the browser-cache.</p>
<p><strong>Update:</strong></p>
<p>The claim in <a href="http://www.thinkvitamin.com/features/webapps/serving-javascript-fast" rel="noreferrer">this ThinkVitamin article</a> that Opera and Safari/Webkit browsers don't cache URLs with ?query=strings is <strong>false</strong>.</p>
<p>Adding a version number parameter to a URL is a perfectly acceptable way to do cache-busting.</p>
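<p>So either form of versioned reference below produces a new cache entry when the revision changes (the paths are just examples):</p>
<pre><code><script src="/r21/example.js"></script>
<script src="/example.js?v=21"></script>
</code></pre>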
<p>What may have confused the author of the ThinkVitamin article is the fact that hitting Enter in the address/location bar in Safari and Opera results in different behavior for URLs with query string in them.</p>
<p>However, (<em>and this is the important part!</em>) Opera and Safari <strong>behave just like IE and Firefox</strong> when it comes to caching embedded/linked images and stylesheets and scripts in web pages - regardless of whether they have "?" characters in their URLs. (This can be verified with a simple test on a normal Apache server.)</p>
<p>(I would have commented on the currently accepted answer if I had the reputation to do it. :-)</p>
| <p>In most browsers the full url is used.
In some browsers, if you have a query in the url, the document will never be cached.</p>
| 10,954 |
<p>I'm trying to encrypt some integers in java using java.security and javax.crypto. </p>
<p>The problem seems to be that the Cipher class only encrypts byte arrays. I can't directly convert an integer to a byte string (or can I?). What is the best way to do this?</p>
<p>Should I convert the integer to a string and the string to byte[]? This seems too inefficient.</p>
<p>Does anyone know a quick/easy or efficient way to do it?</p>
<p>Please let me know.</p>
<p>Thanks in advance.</p>
<p>jbu</p>
| <p>You can turn ints into a byte[] using a DataOutputStream, like this:</p>
<pre><code>ByteArrayOutputStream baos = new ByteArrayOutputStream ();
DataOutputStream dos = new DataOutputStream (baos);
dos.writeInt (i);
byte[] data = baos.toByteArray();
// do encryption
</code></pre>
<p>Then to decrypt it later:</p>
<pre><code>byte[] decrypted = decrypt (data);
ByteArrayInputStream bais = new ByteArrayInputStream (data);
DataInputStream dis = new DataInputStream (bais);
int j = dis.readInt();
</code></pre>
| <p>create a 4-byte array and copy the int to the array in 4 steps, with bitwise ANDs and bitshifting, like Paulo said.</p>
<p>But remember that block algorithms such as AES and DES work with 8 or 16 byte blocks so you will need to pad the array to what the algorithm needs. Maybe leave the first 4 bytes of an 8-byte array as 0's, and the other 4 bytes contain the integer.</p>
| 41,651 |
<p>We have a website, where transactions are entered in and put through a workflow. We are going to follow the standard BLL(Business Logic Layer), DTO(Data Transfer Object), DAL(Data Access Layer) etc. for a tiered application. We have the need to separate everything out because some transactions will cross multiple applications with different business logic.</p>
<p>We also have a backend processor. It handles our transactions once the workflow has been completed. It works with various third party systems, some of which are unstable, or the interface to them is unstable, and then reports the status of the transaction. Each website will have its own version of the backend processor.</p>
<p>Now the question, with N-Tier, they suggest a new BLL for each application. With the layout of the application above, it can be argued that the backend processor and website is one application acting in unison, or two applications with different business logic. What would be the ideal way to handle this? Have it act like one system, or two?</p>
| <p>One thing that I picked up on while learning MVC over the last couple years is the difference between what I call application logic and domain logic. I don't like the term business logic anymore, because it has too much baggage from all the conflicting theories and practices that have used that term too loosely.</p>
<p>Domain logic is the "traditional" business logic, how things are supposed to act, what they require (validation), etc. Application logic is anything that is specific to a given presentation of your domain, IE when the user clicks this submit button in your web app then they are directed to this web page over here (note that this has <em>nothing</em> to do with how a WinForms app or a background processor would work). Application logic should live in your application. Domain logic should live in your BLL and lower, and be reusable across the different applications that may use your common "business logic".</p>
<p>Kind of a general answer, but I hope that helps.</p>
| <p>The "Ideal" way to do this depends on the project at hand and the various requirements of the system.</p>
<p>My default design is to have it act as one app. But if there are more heavyweight processes taking place, I like to create a batching process where the parameters of the requested job are stored and acted upon by a seperate process.</p>
| 32,567 |
<p>I have two applications under <code>tomcat/webapps</code> folder. </p>
<pre><code>tomcat/webapps/App1
tomcat/webapps/App2
</code></pre>
<p>Both applications share the same libraries. Which are stored for example in <code>tomcat/webapps/App1/WEB-INF/lib</code>.</p>
<p>Are both libraries loaded twice in memory?</p>
<p>Should I put these shared libraries in <code>tomcat/server/lib</code>?</p>
| <p>As you can see <a href="http://tomcat.apache.org/tomcat-6.0-doc/class-loader-howto.html" rel="noreferrer">here</a>, Tomcat creates one class-loader per webapp on your server.
Thus, if you have webapp1 and webapp2 that share the same library, then this library will indeed be loaded twice.</p>
<p>You can eventually place this library in the common directory (tomcat-dir/common/lib) if it is shared by <strong>all</strong> webapps that run on your Tomcat server.</p>
| <p>The PermGen space of the heap is used to store classes and metadata about classes in Java.</p>
<p>The error java.lang.OutOfMemoryError: PermGen space can occur frequently because we are loading lots of duplicate libraries in Apache Tomcat.
Can anyone share more details about this?</p>
| 33,616 |
<p>Is there a way to use form fields that do not correspond to database fields, for temporary processing?</p>
<p>I.e. I want to add:</p>
<ul>
<li>temp fields <strong>item1</strong>, <strong>item2</strong></li>
<li>database field <strong>sum</strong></li>
<li>button with record hook that sets <strong>sum</strong> = <strong>item1</strong> + <strong>item2</strong></li>
</ul>
| <p>As far as I know it's simply not possible with ClearQuest.
I've tried to do something similar and was told by our IBM consultant that the only way is to create a DB field for all variables.</p>
| <p>ClearQuest schema designers often include 'temporary' fields in their record types. They do this so they perform operations on hooks to generate another value.</p>
<p>For example, for the Notes fields, there is a 'temporary' Notes_entry field that the user types the most recent note into, and when the record is saved, the value is added to the Notes_Log field. The next time the record is edited the Notes_entry field is cleared so the user can type a new Notes_entry.</p>
| 19,040 |
<p>Consider a regular web application doing mostly form-based CRUD operations over SQL database. Should there be explicit transaction management in such web application? Or should it simply use autocommit mode? And if doing transactions, is "transaction per request" sufficient?</p>
| <p><strong>I would only use explicit transactions when you're doing things that are actually transactional</strong>, e.g., issuing several SQL commands that are highly interrelated. I guess the classic example of this is a banking application -- withdrawing money from one account and depositing it in another account must always succeeed or fail as a batch, otherwise someone gets ripped off!</p>
<p>We use transactions on SO, but only sparingly. Most of our database updates are standalone and atomic. Very few have the properties of the banking example above.</p>
| <p>You should use transactions given that different users will be hitting the database at the same time. I would recommend you do <em>not</em> use autocommit. Use explicit transaction brackets. As to the resolution of each transaction, you should bracket a particular unit of work (whatever that means in your context).</p>
<p>You might also want to look into the different transaction isolation levels that your SQL database supports. They will offer a range of behaviours in terms of what reading users see of partially updated records.</p>
| 21,396 |
<p>Sometimes you need to update the database with many rows that you have in a DataTable, or you have an array full of data. Instead of putting all this data together in a string and then splitting it in SQL Server, or instead of iterating over the DataTable in code row by row and updating the database, is there any other way? Are there other types of variables besides the traditional ones in SQL Server 2005?</p>
| <p>There's a few ways to do this.</p>
<p>If you're simply inserting rows, then I would create a DataTable object with the information in it, then use the SqlBulkCopy object:</p>
<pre><code>SqlBulkCopy copier = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.Default);
copier.BatchSize = 500; //# of rows to insert at a time
copier.DestinationTableName = "dbo.MyTable";
copier.WriteToServer(myDataTable);
</code></pre>
<p>Another option is to wrap your data in xml (however you want to do that), and send it to your stored procedure (which does whatever you need it to do) using the sql 2005 'xml' data type</p>
| <p>John mentioned using XML... and that's the approach I would use for your situation (SQL Server 2005, and making a sproc that handles the SQL for you).</p>
<p>Here's an example of how to get started (this is just a select statement, but you can make it an update if you want):</p>
<pre><code>CREATE PROCEDURE MySproc ( @Accounts XML )
AS
SELECT
Accounts.AccountID.query('.')
FROM
@Accounts.nodes('//ID/text()') AS Accounts(AccountID)
GO
EXEC MySproc '<Accounts><ID>123</ID><ID>456</ID></Accounts>'
</code></pre>
| 31,281 |
<p>I would like to set some vim options in one file in the comments section.</p>
<p>For example, I would like to set this option in one file</p>
<pre><code>set syntax=python
</code></pre>
<p>The file does not have a .py extension and I am not interested in making my vim installation recognise all files with this extension as python files.</p>
<p>I know this can be done because I have seen it, but my googling for this has not yet been fruitful.</p>
| <p>You're wanting a <a href="http://vim.wikia.com/wiki/Modeline_magic" rel="noreferrer">modeline</a> syntax, e.g.</p>
<pre><code># vim: set syntax=python:
</code></pre>
<p>See: <a href="http://vim.wikia.com/wiki/Modeline_magic" rel="noreferrer">Modeline magic</a> at Vim Wikia for more details.</p>
| <p>You override the Vim options by adding the modeline near the top or the bottom of the file, such as:</p>
<pre><code>// vim: set syntax=python:
</code></pre>
<p>or:</p>
<pre><code>/* vim: set syntax=python: */
</code></pre>
<p>or like:</p>
<pre><code># vim: set syntax=python ts=4 :
</code></pre>
<p>Other examples (from <a href="http://vim.wikia.com/wiki/Modeline_magic#Examples" rel="noreferrer">wikia</a>):</p>
<pre><code>// vim: noai:ts=4:sw=4
-or-
/* vim: noai:ts=4:sw=4
*/
-or-
/* vim: set noai ts=4 sw=4: */
-or-
/* vim: set fdm=expr fde=getline(v\:lnum)=~'{'?'>1'\:'1': */
</code></pre>
<p>Here is the example which I'm using (on the last line of the file):</p>
<pre><code># vim: set ts=2 sts=2 et sw=2 ft=python:
</code></pre>
<p>Few highlights:</p>
<ul>
<li>Vim executes a modeline only when the <code>modeline</code> option is on and <code>modelines</code> is a positive integer, and you're not root (some OSes such as Debian, Ubuntu, Gentoo, OS X, etc. disable modelines by default for security reasons), so you need to add <code>set modeline</code> into your <code>~/.vimrc</code> file (<code>:e $MYVIMRC</code>),</li>
<li>the line must be in the first or last few lines,</li>
<li>space between the opening comment and <code>vim:</code> is required,</li>
<li>location where vim checks for the modeline is controlled by the <code>modelines</code> variable (see: <code>:help 'modelines'</code>),</li>
<li>with <code>set</code>, the modeline ends at the first colon (<code>:</code>),</li>
<li>text other than "vim:" can be recognised as a modeline.</li>
</ul>
<p>Related:</p>
<ul>
<li><a href="http://vim.wikia.com/wiki/Modeline_magic" rel="noreferrer">Modeline magic</a> at Vim wikia</li>
<li><a href="https://security.stackexchange.com/q/36001/11825">Vim modeline vulnerabilities</a> at SS or Google: <em>vim modeline vulnerability</em></li>
</ul>
| 49,223 |
<p>I have a table that stores all the volunteers, and each volunteer will be assigned to an appropriate venue to work the event. There is a table that stores all the venues.</p>
<p>It stores the volunteer's appropriate venue assignment into the column <code>venue_id</code>.</p>
<pre><code>table: venues
columns: id, venue_name
table: volunteers_2009
columns: id, lname, fname, etc.., venue_id
</code></pre>
<p>Here is the function to display the list of volunteers, and the problem I am having is displaying their venue assignment. I have never worked much with MySQL joins, because this is the first time I have joined two tables together to grab the appropriate info I need.</p>
<p>So I want it to go to the volunteers_2009 table, grab the venue_id, go to the venues table, match up <code>volunteers_2009.venue_id to venues.id</code>, and display <code>venues.venue_name</code>, so the list shows each volunteer's venue assignment.</p>
<p><img src="https://i.stack.imgur.com/83QdA.jpg" alt="alt text"></p>
<pre><code><?php
// -----------------------------------------------------
//it displays appropriate columns based on what table you are viewing
function displayTable($table, $order, $sort) {
$query = "select * from $table ORDER by $order $sort";
$result = mysql_query($query);
// volunteer's venue query
$query_venues = "SELECT volunteers_2009.venue_id, venues.venue_name FROM volunteers_2009 JOIN venues ON volunteers_2009.venue_id = venues.id";
$result_venues = mysql_query($query_venues);
if($_POST) { ?>
<table id="box-table-a">
<tr>
<th>Name</th>
<?php if($table == 'maillist') { ?>
<th>Email</th>
<?php } ?>
<?php if($table == 'volunteers_2008' || $table == 'volunteers_2009') { ?>
<th>Comments</th>
<?php } ?>
<?php if($table == 'volunteers_2009') { ?>
<th>Interests</th>
<th>Venue</th>
<?php } ?>
<th>Edit</th>
</tr>
<tr>
<?php
while($row = mysql_fetch_array($result))
{
$i = 0;
while($i <=0)
{
print '<td>'.$row['fname'].' '.$row['lname'].'</td>';
if($table == 'maillist') {
print '<td><a href="mailto:'.strtolower($row['email']).'">'.strtolower($row['email']).'</a></td>';
}
if($table == 'volunteers_2008' || $table == 'volunteers_2009') {
print '<td><small>'.substr($row['comments'], 0, 32).'</small></td>';
}
if($table == 'volunteers_2009') {
print '<td><small>1) '.$row['choice1'].'<br>2) '.$row['choice2'].'<br>3) '.$row['choice3'].'</small></td>'; ?>
<td> <?php
if($row_venues['venue_name'] != '') {
// print venue assigned
print $row_venues['venue_id'].' '.$row_venues['venue_name'].' ';
} else { print 'No Venue Assigned'; } ?>
</td> <?php
} ?>
<td><a href="?mode=upd&id=<?= $row[id] ?>&table=<?= $table ?>">Upd</a> / <a href="?mode=del&id=<?= $row[id] ?>&table=<?= $table ?>" onclick="return confirm('Are you sure you want to delete?')">Del</a></td> <?php
$i++;
}
print '</tr>';
}
print '</table>';
}
}
// -----------------------------------------------------
?>
</code></pre>
| <p>We had the same behaviour as Todd Smith mentions. After the upgrade to .Net 3.5 SP1 we got this error.</p>
<p>In our case the reason was the <strong>protection / obfuscation</strong> of our DLLs with Remotesoft Protector.</p>
<p>This is most likely only our problem and not a solution for anyone else. Nevertheless I thought I'd mention it here cause it did cost us some hours to find out.</p>
<p>If you protect, obfuscate your DLLs: disabling that might be worth a try.</p>
| <p>I'm not familiar with this error but have you verified that your security/permissions are set correctly on the dedicated server? </p>
<p>Also you might want to confirm that you can access the feed directly from a browser on the dedicated server.</p>
<p><a href="http://forums.asp.net/p/675515/675515.aspx" rel="nofollow noreferrer">This link</a> may be helpful.</p>
| 24,280 |
<p>I have a generic list of objects in C#, and wish to clone the list. The items within the list are cloneable, but there doesn't seem to be an option to do <code>list.Clone()</code>.</p>
<p>Is there an easy way around this?</p>
| <p>You can use an extension method.</p>
<pre><code>static class Extensions
{
public static IList<T> Clone<T>(this IList<T> listToClone) where T: ICloneable
{
return listToClone.Select(item => (T)item.Clone()).ToList();
}
}
</code></pre>
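<p>Usage would then look something like this (<code>MyType</code> and <code>GetItems()</code> are placeholders for your own element type and list source; the element type must implement <code>ICloneable</code>, and the extension needs <code>using System.Linq;</code>):</p>
<pre><code>List<MyType> original = GetItems();      // however you build your list
IList<MyType> copy = original.Clone();   // calls Clone() on every element
</code></pre>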
| <p>I've made my own extension which clones (shallow-copies) an ICollection of items that do not implement ICloneable</p>
<pre><code>static class CollectionExtensions
{
public static ICollection<T> Clone<T>(this ICollection<T> listToClone)
{
var array = new T[listToClone.Count];
listToClone.CopyTo(array,0);
return array.ToList();
}
}
</code></pre>
| 27,466 |
<p>I am a web-developer working in PHP. I have some limited experience with using Test Driven Development in C# desktop applications. In that case we used nUnit for the unit testing framework.</p>
<p>I would like to start using TDD in new projects but I'm really not sure where to begin. </p>
<p>What recommendations do you have for a PHP-based unit testing framework and what are some good resources for someone who is pretty new to the TDD concept?</p>
| <p>I've used both PHPUnit & <strong><a href="http://simpletest.org/" rel="noreferrer">SimpleTest</a></strong> and I found <strong>SimpleTest</strong> to be easier to use.</p>
<p>As far as TDD goes, I haven't had much luck with it in the purest sense. I think that's mainly a time/discipline issue on my part though.</p>
<p>Adding tests after the fact has been somewhat useful, but my favorite thing to do is write SimpleTest tests that test for specific bugs that I have to fix. That makes it very easy to verify that things are actually fixed and stay fixed.</p>
| <p>Test-driven development is an approach where tests are always written before code.
You should learn <a href="https://phpunit.de/" rel="nofollow noreferrer">PHPUnit</a> first in order to start TDD development. Then, while making your function, you should always think about how the function can fail, write a test case for that in PHPUnit, and in the end write the code to make your test pass. It will be a new approach, so it will be a little difficult in the beginning, but once you get used to it you will find it very useful, especially for post-development bugs and coding style. You can go through this <a href="http://vihasverma.blogspot.com/" rel="nofollow noreferrer">Step By Step</a> guide for understanding this concept.</p>
<p>Always remember: if tests are written after development, they are useless. So TDD is a must if you are planning to write unit tests.</p>
| 6,831 |
<p>I am trying to use the JQuery UI datepicker (latest stable version 1.5.2) on an IE6 website. But I am having the usual problems with combo boxes (selects) on IE6 where they float above other controls. I have tried adding the bgIframe plugin after declaring the datepicker with no luck.</p>
<p>My guess is that the .ui-datepicker-div to which I am attaching the bgIframe doesn't exist until the calendar is shown.</p>
<p>I am wondering if I can put the .bgIframe() command directly into the datepicker .js file and if so, where? (The similar control by Kelvin Luck uses this approach.)</p>
<p>Current code</p>
<p>$(".DateItem").datepicker({<br>
showOn:"button",<br>
... etc ...<br>
});<br>
$(".ui-datepicker-div").bgIframe();</p>
| <p>This should be taken care of for you by default.</p>
<p>The iframe gets included by default in IE6 in the datepicker. The style for it, called ui-datepicker-cover that handles the transparency. The only time this isn't the case is in the old themeroller code the style wasn't in there.</p>
| <p>I have noted Marc's comment that the ui-datepicker-cover style should handle this. In my case the right and bottom edges of the calendar would still show drop downs through them.</p>
<p>It looks like the size of the iFrame is initially being set by the following lines of code</p>
<pre><code>if ($.browser.msie && parseInt($.browser.version, 10) < 7) // fix IE < 7 select problems
$('iframe.ui-datepicker-cover').css({ width: inst.dpDiv.width() + 4, height: inst.dpDiv.height() + 4 });
</code></pre>
<p>in the postProcess function.</p>
<p>This size is then reset each time the date is changed by the line</p>
<pre><code>inst.dpDiv.empty().append(this._generateHTML(inst)).
find('iframe.ui-datepicker-cover').
css({ width: dims.width, height: dims.height });
</code></pre>
<p>My simplistic solution was to remove these two sets of code and fix the size of the cover style in the .css file</p>
<pre><code>//if ($.browser.msie && parseInt($.browser.version, 10) < 7) // fix IE < 7 select problems
// $('iframe.ui-datepicker-cover').css({ width: inst.dpDiv.width() + 4, height: inst.dpDiv.height() + 4 });
inst.dpDiv.empty().append(this._generateHTML(inst))//. <=== note the // before the .
//find('iframe.ui-datepicker-cover').
//css({ width: dims.width, height: dims.height });
</code></pre>
<p>in css file set the width of .ui-datepicker-cover to 220px, height to 200px</p>
<p>Steve</p>
| 19,168 |
<p>I've got a bunch of 3D vertex positions & need to generate a convex hull containing them; does anyone know of any QHull bindings for .NET? or native 3D Delaunay triangulation algorithms?</p>
| <p>A 3D Delaunay is tricky; I'm not sure it's even possible to strictly define a Delaunay constraint for a 3D surface.<br>
The normal technique if you just want to mesh a surface is to pick a direction and map that onto 2 coordinates and do a 2d delaunay. For a height map it's easy to just use x,y.
Then when you have the nodes forming each triangle you can of course use their 3d coordinates. </p>
<p>The best 2d code is probably <a href="http://www.cs.cmu.edu/~quake/triangle.html" rel="nofollow noreferrer">http://www.cs.cmu.edu/~quake/triangle.html</a><br>
This will also give you the convex hull</p>
| <p>Have a look at <a href="http://ozviz.wasp.uwa.edu.au/~pbourke/geometry/insidepoly/" rel="nofollow noreferrer">this site</a> that takes about 2D and 3D point finding in shapes.</p>
| 21,916 |
<p>How do I get a list of the files checked out by users (including the usernames) using P4V or P4? </p>
<p>I want to provide a depot location and see a list of any files under that location (including sub folders) that are checked out.</p>
| <p>From the command line:</p>
<pre><code>p4 opened -a //depot/Your/Location/...
</code></pre>
<p>The ... indicates that sub folders should be included.</p>
| <p>In P4V: try to rename the top directory. You will get a warning and a list of the currently checked-out files with user names.</p>
| 16,124 |
<p>I want to create buttons with icons in Flex dynamically using Actionscript.</p>
<p>I tried this, with no success:</p>
<pre><code>var closeButton = new Button();
closeButton.setStyle("icon", "@Embed(source='images/closeWindowUp.png");
</code></pre>
| <p>I found an answer that works for me. In my .mxml file, I create Classes for the icons I will use:</p>
<pre><code>// Classes for icons
[Embed(source='images/closeWindowUp.png')]
public static var CloseWindowUp:Class;
[Embed(source='/images/Down_Up.png')]
public static var Down_Up:Class;
[Embed(source='/images/Up_Up.png')]
public static var Up_Up:Class;
</code></pre>
<p>In the Actionscript portion of my application, I use these classes when dynamically creating buttons:</p>
<pre><code>var buttonHBox:HBox = new HBox();
var closeButton:Button = new Button();
var upButton:Button = new Button();
var downButton:Button = new Button();
closeButton.setStyle("icon", SimpleWLM.CloseWindowUp);
buttonHBox.addChild(closeButton);
upButton.setStyle("icon", SimpleWLM.Up_Up);
buttonHBox.addChild(upButton);
downButton.setStyle("icon", SimpleWLM.Down_Up);
buttonHBox.addChild(downButton);
</code></pre>
| <p>I'm assuming you're adding it to the stage?</p>
<p>Also, I think your Embed is missing a close quote / paren.</p>
<pre><code>closeButton.setStyle("icon", "@Embed(source='images/closeWindowUp.png");
</code></pre>
<p>should be:</p>
<pre><code>closeButton.setStyle("icon", "@Embed(source='images/closeWindowUp.png')");
</code></pre>
| 38,499 |
<p>I have a simple web service, it takes 2 parameters one is a simple xml security token, the other is usually a long xml string. It works with short strings but longer strings give a 400 error message. maxMessageLength did nothing to allow for longer strings.</p>
| <p>You should remove the quota limitations as well.
Here is how you can do it in code with the TCP binding.
I have added some code that shows how to remove timeout problems, because sending very big arguments usually causes timeout issues. So use the code wisely...
Of course, you can set these parameters in the config file as well.</p>
<pre><code> NetTcpBinding binding = new NetTcpBinding(SecurityMode.None, true);
// Allow big arguments on messages. Allow ~500 MB message.
binding.MaxReceivedMessageSize = 500 * 1024 * 1024;
// Allow unlimited time to send/receive a message.
// It also prevents closing idle sessions.
        // From MSDN: "To prevent the service from aborting idle sessions prematurely, increase the Receive timeout on the service endpoint's binding."
binding.ReceiveTimeout = TimeSpan.MaxValue;
binding.SendTimeout = TimeSpan.MaxValue;
XmlDictionaryReaderQuotas quotas = new XmlDictionaryReaderQuotas();
// Remove quotas limitations
quotas.MaxArrayLength = int.MaxValue;
quotas.MaxBytesPerRead = int.MaxValue;
quotas.MaxDepth = int.MaxValue;
quotas.MaxNameTableCharCount = int.MaxValue;
quotas.MaxStringContentLength = int.MaxValue;
binding.ReaderQuotas = quotas;
</code></pre>
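<p>To show where such a binding object ends up (purely as a sketch -- <code>IMyService</code> and the address below are placeholders), you would hand it to your channel factory or service host instead of relying on the config file:</p>
<pre><code>// Client side: create a channel that uses the binding configured above.
ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>(
    binding, new EndpointAddress("net.tcp://localhost:9000/MyService"));
IMyService proxy = factory.CreateChannel();
</code></pre>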
| <p>You should remove the quota limitations as well.
Here is how you can do it in code with the TCP binding.
I have added some code that shows how to remove timeout problems, because sending very big arguments usually causes timeout issues. So use the code wisely...
Of course, you can set these parameters in the config file as well.</p>
<pre><code> NetTcpBinding binding = new NetTcpBinding(SecurityMode.None, true);
// Allow big arguments on messages. Allow ~500 MB message.
binding.MaxReceivedMessageSize = 500 * 1024 * 1024;
// Allow unlimited time to send/receive a message.
// It also prevents closing idle sessions.
        // From MSDN: "To prevent the service from aborting idle sessions prematurely, increase the Receive timeout on the service endpoint's binding."
binding.ReceiveTimeout = TimeSpan.MaxValue;
binding.SendTimeout = TimeSpan.MaxValue;
XmlDictionaryReaderQuotas quotas = new XmlDictionaryReaderQuotas();
// Remove quotas limitations
quotas.MaxArrayLength = int.MaxValue;
quotas.MaxBytesPerRead = int.MaxValue;
quotas.MaxDepth = int.MaxValue;
quotas.MaxNameTableCharCount = int.MaxValue;
quotas.MaxStringContentLength = int.MaxValue;
binding.ReaderQuotas = quotas;
</code></pre>
| 16,434 |
<p>I read the Git manual, FAQ, Git - SVN crash course, etc. and they all explain this and that, but nowhere can you find a simple instruction like:</p>
<p>SVN repository in: <code>svn://myserver/path/to/svn/repos</code></p>
<p>Git repository in: <code>git://myserver/path/to/git/repos</code></p>
<pre><code>git-do-the-magic-svn-import-with-history \
svn://myserver/path/to/svn/repos \
git://myserver/path/to/git/repos
</code></pre>
<p>I don't expect it to be that simple, and I don't expect it to be a single command. But I do expect it not to try to explain anything - just to say what steps to take given this example.</p>
| <p>Create a users file (i.e. <code>users.txt</code>) for mapping SVN users to Git:</p>
<pre><code>user1 = First Last Name <[email protected]>
user2 = First Last Name <[email protected]>
...
</code></pre>
<p>You can use this one-liner to build a template from your existing SVN repository:</p>
<pre><code>svn log -q | awk -F '|' '/^r/ {gsub(/ /, "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > users.txt
</code></pre>
<p>git svn will stop if it finds an SVN user that is not in the file. But after that, you can update the file and pick up where you left off.</p>
<p>Now pull the SVN data from the repository:</p>
<pre><code>git svn clone --stdlayout --no-metadata --authors-file=users.txt svn://hostname/path dest_dir-tmp
</code></pre>
<p>This command will create a new Git repository in <code>dest_dir-tmp</code> and start pulling the SVN repository. Note that the "--stdlayout" flag implies you have the common "trunk/, branches/, tags/" SVN layout. If your layout differs, become familiar with <code>--tags</code>, <code>--branches</code>, <code>--trunk</code> options (in general <code>git svn help</code>).</p>
<p>All common protocols are allowed: <code>svn://</code>, <code>http://</code>, <code>https://</code>. The URL should target the base repository, something like <a href="http://svn.mycompany.com/myrepo/repository" rel="noreferrer">http://svn.mycompany.com/myrepo/repository</a>. The URL string must <strong>not</strong> include <code>/trunk</code>, <code>/tag</code> or <code>/branches</code>.</p>
<p>Note that after executing this command it very often looks like the operation is "hanging/frozen", and it's quite normal that it can be stuck for a long time after initializing the new repository. Eventually, you will then see log messages which indicate that it's migrating.</p>
<p>Also note that if you omit the <code>--no-metadata</code> flag, Git will append information about the corresponding SVN revision to the commit message (i.e. <code>git-svn-id: svn://svn.mycompany.com/myrepo/<branchname/trunk>@<RevisionNumber> <Repository UUID></code>)</p>
<p>If a user name is not found, update your <code>users.txt</code> file then:</p>
<pre><code>cd dest_dir-tmp
git svn fetch
</code></pre>
<p>You might have to repeat that last command several times, if you have a large project until all of the Subversion commits have been fetched:</p>
<pre><code>git svn fetch
</code></pre>
<p>When completed, Git will checkout the SVN <code>trunk</code> into a new branch. Any other branches are set up as remotes. You can view the other SVN branches with:</p>
<pre><code>git branch -r
</code></pre>
<p>If you want to keep other remote branches in your repository, you want to create a local branch for each one manually. (Skip trunk/master.) If you don't do this, the branches won't get cloned in the final step.</p>
<pre><code>git checkout -b local_branch remote_branch
# It's OK if local_branch and remote_branch are the same names
</code></pre>
<p>Tags are imported as branches. You have to create a local branch, make a tag and delete the branch to have them as tags in Git. To do it with tag "v1":</p>
<pre><code>git checkout -b tag_v1 remotes/tags/v1
git checkout master
git tag v1 tag_v1
git branch -D tag_v1
</code></pre>
<p>Clone your GIT-SVN repository into a clean Git repository:</p>
<pre><code>git clone dest_dir-tmp dest_dir
rm -rf dest_dir-tmp
cd dest_dir
</code></pre>
<p>The local branches that you created earlier from remote branches will only have been copied as remote branches into the newly cloned repository. (Skip trunk/master.) For each branch you want to keep:</p>
<pre><code>git checkout -b local_branch origin/remote_branch
</code></pre>
<p>Finally, remove the remote from your clean Git repository that points to the now-deleted temporary repository:</p>
<pre><code>git remote rm origin
</code></pre>
| <h1>For this, I have used <strong>svn2git</strong> library with the following procedure:</h1>
<blockquote>
<p>sudo apt-get install git-core git-svn ruby<br>
sudo gem install svn2git<br>
svn log --quiet | grep -E "r[0-9]+ \| .+ \|" | cut -d'|' -f2 | sed 's/ //g' | sort | uniq > authors.txt (this command is for mapping the authors)</p>
</blockquote>
<p>The above step should be performed in the folder that you are going to convert from SVN to Git.</p>
<blockquote>
<p>Add one mapping per line in authors.txt like this</p>
</blockquote>
<pre><code>anand = Anand Tripathi <email_id>
trip = Tripathi Anand <email_id>
</code></pre>
<p>Create a folder for a new Git repository and execute the command below, supplying the path to authors.txt:</p>
<pre><code>svn2git <svn_repo_path> --nobranches --notags --notrunk --no-minimize-url --username <user_name> --verbose --authors <author.txt_path>
</code></pre>
<p>If there is no trunk and there are no tags or branches, execute the command above as shown; if the repository root itself is the trunk, pass --rootistrunk instead; if a trunk is present, pass --trunk <trunk_name>.</p>
<blockquote>
<p>git remote add origin <git_repo_url><br>
git push --all origin <br>
git push --tags origin</p>
</blockquote>
| 10,454 |
<p>I want to execute a certain batch file and redirect its console output to a text control in visual c++ or redirect the console output at the same time the logs/echo are showing.</p>
| <p>Basically, you have to make the run process to write to a pipe, and to read the output of this pipe.</p>
<p>[EDIT] I know how SciTE does that (you can take a look at the source: win32/SciTEWin.cxx, ExecuteOne function), I searched a slightly more generic way, found <a href="http://support.microsoft.com/?scid=kb%3Ben-us%3B190351&x=15&y=14" rel="nofollow noreferrer" title="How to spawn console processes with redirected standard handles">How to spawn console processes with redirected standard handles</a> from Microsoft itself.<br>
If you seach <em>CreatePipe PeekNamedPipe CreateProcess</em> keyword, for example, you might find other examples.</p>
| <p>If elegance is not a priority then a really simple solution might be to redirect the output to a file, and then read in the file contents.</p>
| 15,384 |
<p>What is the difference between <code>bool</code> and <code>Boolean</code> types in C#?</p>
| <p><code>bool</code> is an alias for <code>System.Boolean</code> just as <code>int</code> is an alias for <code>System.Int32</code>. See a full list of aliases here: <a href="https://msdn.microsoft.com/en-us/library/ya5y69ds.aspx" rel="noreferrer">Built-In Types Table (C# Reference)</a>.</p>
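<p>A quick way to convince yourself that they are the same type:</p>
<pre><code>bool a = true;
System.Boolean b = a;  // no conversion needed - it's the same type
Console.WriteLine(typeof(bool) == typeof(System.Boolean));  // prints True
</code></pre>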
| <p>Perhaps bool is a tad "lighter" than Boolean; interestingly, changing this:</p>
<pre><code>namespace DuckbillServerWebAPI.Models
{
public class Expense
{
. . .
public bool CanUseOnItems { get; set; }
}
}
</code></pre>
<p>...to this:</p>
<pre><code>namespace DuckbillServerWebAPI.Models
{
public class Expense
{
. . .
public Boolean CanUseOnItems { get; set; }
}
}
</code></pre>
<p>...caused my .cs file to sprout a "using System;" directive. Changing the type back to "bool" caused the using clause's hair to turn grey.</p>
<p>(Visual Studio 2010, WebAPI project)</p>
| 16,316 |
<p>A table in my area of responsibility of our product has been criticised as having more than one orthogonal index.</p>
<p>What is an orthogonal index?<br>
Why is it bad?<br>
How can the situation be avoided?</p>
<p>--Update--<br>
The back-end database engine isn't necessarily relevant here as our application is database-agnostic. But if it helps, Oracle is one possibility.</p>
<p>The table in question isn't used for financial analysis.</p>
| <p>Orthogonal means independent of each other.</p>
<p>No idea why it would be bad. In fact, I usually use secondary indexes (besides the 'id' autoincrement primary key) when there's a common query that has nothing to do with the primary one.</p>
| <p>Orthogonal simply means independent, i.e. unrelated to the main concern.</p>
| 16,151 |
<p>I have a MySQL database of keywords that are presently mixed-case. However, I want to convert them all to lowercase. Is there an easy command to do this, either using MySQL or MySQL and PHP?</p>
| <pre><code>UPDATE table SET colname=LOWER(colname);
</code></pre>
| <p>I believe in PHP you can use</p>
<pre><code>strtolower()
</code></pre>
<p>so you could write a PHP script to read all the entries in the table and then use that function to write them back in lowercase</p>
| 27,341 |
<p>I'm using the GoDiagrams suite which seems to recommend .emf files for node images since they scale better on resizing. Bitmaps get all blurry.<br>
Google doesn't show up any good tools that seem to do this... So to reiterate I'm looking for a image converter (preferably free) that converts an image (in one of the common formats like Bitmaps or JPEGs or GIFs) to an .EMF File.</p>
<p><em>Update: I dont need to do it via code. Simple batch-conversion of images will do.</em></p>
| <p><a href="http://www.inkscape.org/" rel="nofollow noreferrer">Inkscape</a> works well, it was recommended to me <a href="https://stackoverflow.com/questions/28872/free-windows-based-emf-editor">here</a>. </p>
| <p>Really funny one Microsoft. Now this might seem outlandish but it works... (I have Visio2007). Just found this out from a colleague</p>
<p>You can drop a JPEG into <strong>Microsoft Visio</strong> (no less), Do a 'Save As' to .emf and voila! nice quality of a picture too.</p>
| 7,750 |
<p>My application has just started exhibiting strange behaviour.</p>
<p>I can boot it through the Carbide Debugger (using TRK) and it works fine with no visible errors and is left installed on the device.</p>
<p>Any further attempts to launch the application fail, even after a restart. Uninstalling and downloading the .sisx file manually also doesn't work.</p>
<p>Has anyone had any experience like this? Could it be some resource file that is missing, or is there any other way I can find out what is happening?</p>
| <p>why do you need the IList ? </p>
<pre><code>static void SetValue2(this Array a, object value, int i) {
int[] indices = new int[a.Rank];
for (int d = a.Rank - 1; d >= 0; d--) {
var l = a.GetLength(d);
indices[d] = i % l;
i /= l
}
a.SetValue(value, indices);
}
</code></pre>
<p>Test Code:</p>
<pre><code>static void Main(string[] args) {
int[, ,] arr2 = {
{{0,1,2}, {3,4,5}, {6,7,8}},
{{9,10,11}, {12,13,14}, {15,16,17}},
{{18,19,20}, {21,22,23}, {24,25,26}}
};
for (int i = 0; i < arr2.Length; i++) {
arr2.SetValue2(30, i);
}
}
</code></pre>
| <p><code>SetValue()</code> should work. Take a look at <a href="http://msdn.microsoft.com/en-us/library/758awxk7.aspx" rel="nofollow noreferrer">this</a> for a little more inspiration.</p>
<p>EDIT: Could you not just do</p>
<pre><code>{{30,30,30}, {30,30,30}, {30,30,30}}
, {{30,30,30}, {30,30,30}, {30,30,30}}
, {{30,30,30}, {30,30,30}, {30,30,30}
}
</code></pre>
<p>As a side note, are you sure you want to return an <code>IList<int></code> from <code>getCumulativeLengths</code>?</p>
<p>I always thought, be generous on input, and strict on output. </p>
| 46,957 |
| <p>I change the FontSize of text in a Style trigger; this causes the Control containing the text to resize as well. How can I change the FontSize without affecting the parent's size?</p>
| <p>A nice trick to isolate an element from its parent layout wise is to place the element in a Canvas</p>
<p>In the markup below there are two copies of your element
The first is hidden and establishes the size of your control
The second is visible but wrapped in a Canvas so its layout size does not affect the parent.</p>
<pre><code><Parent>
<Grid>
<Element Visibility="Hidden"/>
<Canvas>
<Element />
</Canvas>
<Grid>
</Parent>
</code></pre>
| <p>What kind of control are you using? If this is a HeaderedControl like a GroupBox or TabItem then you need to specifically set the HeaderTemplate like this:</p>
<pre><code><DataTemplate x:Key="MyHeaderTemplate">
<TextBlock Text="{Binding}" Fontsize="14" FontWeight="Bold" />
</DataTemplate>
</code></pre>
| 41,641 |
<p>I'm wondering how you deal with displaying the release revision number when pushing new versions of your app live.</p>
<p>You can use <code>$Rev$</code> in a file to get latest revision, but only after you update the file.</p>
<p>What if I want to update a string in one file every time I change any file in the repository/directory?</p>
<p>Is there a way?</p>
| <p>Did you try to use hooks? They work on the server side only but may do the trick. Otherwise I would just call a script to update the revision if the keywords aren't suitable for you.</p>
| <p>On the one project where I had a reason to do this, I cheated: it calls <code>svnversion</code> on itself when it starts up.</p>
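<p>If your application happens to be .NET, a rough sketch of that "cheat" could look like the following (it assumes <code>svnversion</code> is on the PATH and the app is running from a Subversion working copy):</p>
<pre><code>ProcessStartInfo psi = new ProcessStartInfo("svnversion", ".")
{
    RedirectStandardOutput = true,
    UseShellExecute = false,
    CreateNoWindow = true
};
using (Process p = Process.Start(psi))
{
    string revision = p.StandardOutput.ReadToEnd().Trim(); // e.g. "4123" or "4120:4123M"
    p.WaitForExit();
    // display 'revision' wherever the version string is shown
}
</code></pre>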
| 13,322 |
| <p>I am currently working on a web application that uses the ASP.NET 2.0 framework. I need to redirect to a certain page, say SessionExpired.aspx, when the user session expires. There are a lot of pages in the project, so adding code to every page of the site is not really a good solution. I have MasterPages though, which I think might help.</p>
<p>Thanks!</p>
| <p>You can handle this in global.asax in the Session_Start event. You can check for a session cookie in the request there. If the session cookie exists, the session has expired:</p>
<pre><code> public void Session_OnStart()
{
if (HttpContext.Current.Request.Cookies.Contains("ASP.NET_SessionId") != null)
{
HttpContext.Current.Response.Redirect("SessionTimeout.aspx")
}
}
</code></pre>
<p>Alas I have not found any elegant way of finding out the name of the session cookie.</p>
| <p>Add or update your Web.Config file to include this or something similar:</p>
<pre><code><customErrors defaultRedirect="url" mode="RemoteOnly">
<error statusCode="408" redirect="~/SessionExpired.aspx"/>
</customErrors>
</code></pre>
| 16,974 |
| <p>I've used MyGeneration, and I love it for generating code that uses the Data Access Application Blocks from Microsoft for my Data Access Layer, and for keeping my database concepts in sync with the domain I am modeling. It took a steeper-than-expected learning curve one weekend to make it productive, though.</p>
<p>I'm wondering what others are doing related to code generation.</p>
<p><a href="http://www.mygenerationsoftware.com" rel="noreferrer">http://www.mygenerationsoftware.com</a></p>
<p><a href="http://www.codesmithtools.com/" rel="noreferrer">http://www.codesmithtools.com/</a></p>
<p>Others?</p>
<p>Back in 2000, or so, the company I worked for used a product from Veritas Software (I believe it was) to model components and generate code that integrated components (dlls). I didn't get a lot of experience with it, but it seems that code generation has been the "holy grail" for a long time. Is it practical? How are others using it?</p>
<p>Thanks!</p>
| <p>T4 is the CodeSmith killer for Microsoft!!!!</p>
<p>Go check it out. Microsoft doesn't want to destroy their partners so they don't advertise it, but it is a thing to be reckoned with and ITS FREE and comes installed in Visual Studio 2008.</p>
<p><a href="http://www.olegsych.com" rel="nofollow noreferrer">www.olegsych.com</a></p>
<p><a href="http://www.codeplex.com/t4toolbox" rel="nofollow noreferrer">codeplex.com/t4toolbox</a></p>
<p><a href="http://www.t4editor.net" rel="nofollow noreferrer">www.t4editor.net</a></p>
| <p>I have used CodeSmith. Was pretty helpful.</p>
| 11,915 |
<p>I have a simple email address sign up form as follows:</p>
<pre><code><form action="" id="newsletterform" method="get">
<input type="text" name="email" class="required email" id="textnewsletter" />
<input type="submit" id="signup" />
</form>
</code></pre>
<p><strong>Here's what I want to be able to do:</strong></p>
<ul>
<li>Validate the form to look for an empty string or a incorrectly filled out email address one the user clicks submit or hits enter.</li>
<li>If one of the above happens (empty string etc), I would like to generate an error to let the user know.</li>
<li>Then once the user fills out a correctly formed email address and hits submit (or enter) I want the form to send the email address to wherever I specify in the jQuery code and then generate a little "Thank you for signing up notice", all without reloading the browser.</li>
</ul>
<p>I have looked at too many tutorials and my eyes are pretty much aching at this stage, so please don't point me to any urls (I most likely have been there).</p>
<p>If someone could provide a bare-bones outline of what to do, it would be so much appreciated.</p>
| <p>First, please be sure you do all of your validation on the server-side. I like to get my forms working without any JavaScript whatsoever. I am assuming you have done that much.</p>
<p>****ORIGINAL ANSWER***</p>
<p>Then, change your "submit" element to a button element. On the OnClick of the button element, run a JavaScript function that validates. Lots of samples on how to do that as you know.</p>
<p>If the validation fails, send up alerts. If it is successful, use JavaScript to submit the form.</p>
<p>****NEW, TOOL USING ANSWER***</p>
<p>You can also employ jQuery (as orip points out) and its plugins to do this. They handle a lot of the hard work. Please make sure my comments are telling the correct story. This code also does the AJAX submitting.</p>
<pre><code><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<!-- Load JQuery on your page -->
<script src="http://code.jquery.com/jquery-latest.js"></script>
    <!-- Load JQuery validation styles and (rules?) on your page -->
<link rel="stylesheet" href="http://dev.jquery.com/view/trunk/plugins/validate/jquery.validate.css" type="text/css" media="screen" />
<!-- Load JQuery validation plugin on your page -->
<script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/validate/jquery.validate.js"></script>
<!-- Load JQuery form plugin on your page -->
<script type="text/javascript" src="http://jqueryjs.googlecode.com/svn/trunk/plugins/form/jquery.form.js"></script>
<script type="text/javascript" language="javascript">
//Wait until the document is loaded, then call the validation. Due to magic in JQuery or the plugin
// this only happens when the form is submitted.
$(document).ready(function(){
//When the submit button is clicked
$("#signup").click(function() {
            //if the form is valid according to the rules
if ($("#newsletterform").valid()) {
//Submit the form via AJAX
$('#newsletterform').ajaxForm(function() {
//this alert lets me know the submission was successfull
alert("Thank you!"); });
}
})
});
</script>
<!-- Just some styles -->
<style type="text/css">
* { font-family: Verdana; font-size: 96%; }
label { width: 10em; float: left; }
label.error { float: none; color: red; padding-left: .5em; vertical-align: top; }
p { clear: both; }
.submit { margin-left: 12em; }
em { font-weight: bold; padding-right: 1em; vertical-align: top; }
</style>
</head>
<body>
<form action="" id="newsletterform" method="get">
<!-- The classes assigned here are where the validation rules come fome.
This is required, and it must be an email -->
<input type="text" name="email" class="required email" id="textnewsletter" />
<input type="submit" id="signup" />
</form>
</body>
</html>
</code></pre>
<p>This isn't the tightest code you could write, but it will serve as an example.</p>
| <p>Thanks for all your help guys.</p>
<p>I have a solution that works perfectly the way I want (had to hire somebody :) - Anywho, for anybody else that needs it, here you go:</p>
<pre><code>$(document).ready(function () {
$('#textnewsletter').click(function ()
{
if($('#textnewsletter').val()=='Your email address')
$(this).attr("value",'');
});
$('form#newslattersub').submit(function ()
{
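        // isEmail() is a separate helper (defined elsewhere, not shown here) that checks the email format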
if(!isEmail($('#textnewsletter').val() ))
{
$('p.idmsg').html('<span class="error">Please enter a valid email address</span>').hide().fadeIn("slow");
}
else{
$.post($('form#newslattersub').attr('action'), { email:$('#textnewsletter').val() },function(data){
$('p.idmsg').html('<span class="success">Thanks for signing up! Please check your email for confirmation!</span>').hide().fadeIn("slow");
//alert("server return " + data);
});
}
return false;
});
});
</code></pre>
| 46,560 |
<p>My goal is to maintain a web file server separately from my main ASP.NET application server for better scalability. The web file server will store a lot of files downloaded by users.<br>
So the question is: Is it worth adopting FreeBSD + Apache + ZFS, or will good old IIS be convenient enough?</p>
| <p>I understand you will serve only static files. In this case, lightweight HTTP servers will give you a higher performance for a given machine. The following are well known:</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/Lighttpd" rel="nofollow noreferrer">Lighttpd</a></li>
<li><a href="http://en.wikipedia.org/wiki/Thttpd" rel="nofollow noreferrer">Thttpd</a></li>
<li><a href="http://www.nginx.net/" rel="nofollow noreferrer">Nginx</a></li>
</ul>
<p>Many more are listed on <a href="http://en.wikipedia.org/wiki/Tiny_web_servers" rel="nofollow noreferrer">Wikipedia</a>. There's a more recent <a href="http://www.ibm.com/developerworks/web/library/wa-ltwebserv/" rel="nofollow noreferrer">article</a> on IBM DeveloperWorks.</p>
| <p>If you're serving files over the Internet, you might also consider <a href="http://www.amazon.com/gp/browse.html?node=16427261" rel="nofollow noreferrer">Amazon's S3 service</a>. I've found the rates and reliability to be better than anything I could do (or find) on my own.</p>
| 4,746 |
<p>I have two processes; one will query the other for data. There will be a huge number of queries in a limited time (10,000 per second), and a large amount of data (>100 MB) will be transferred per second. The data will be of a simple numeric type (double, int).
My question is: which way should I connect these processes?</p>
<p>Shared memory, message queue, LPC (local procedure call), or others?</p>
<p>I also want to ask which library you suggest. By the way, please do not suggest MPI.
Edit: this is under Windows XP, 32-bit.</p>
| <p>One word: <a href="https://www.boost.org/doc/libs/1_74_0/doc/html/interprocess.html" rel="noreferrer">Boost.InterProcess</a>. If it really needs to be fast, shared memory is the way to go. You have nearly zero overhead, as the operating system does the usual mapping between virtual and physical addresses and no copy is required for the data. You just have to look out for concurrency issues.</p>
<p>For actually sending commands like <em>shutdown</em> and <em>query</em>, I would use message queues. I previously used localhost network programming to do that, and used manual shared memory allocation, before I knew about Boost. Damn, if I had to rewrite the app, I would immediately pick Boost. Boost.InterProcess makes this much easier for you. Check it out.</p>
| <p>I'll second Marc's suggestion -- I'd not bother with boost unless you have a portability concern or want to do cool stuff like map standard container types over shared memory (in which case I'd definitely use boost). </p>
<p>Otherwise, message queues and shared memory are pretty simple to deal with.</p>
| 48,649 |