Q: Homeomorphisms of disjoint unions and unions in a metric space. Suppose $A$ and $B$ are disjoint subsets of a metric space $X$, and equip $A$, $B$, and $A\cup B$ with the subspace topology. Suppose $d(x,y)\geq \delta$ for all $x\in A, y\in B$ and some $\delta>0$. I want to show that $A\cup B$ is homeomorphic to $A\sqcup B$. First let me construct a function $$ f: A\cup B\to A\sqcup B $$ If $a\in A$ then $f(a)=(a,0)$; if $b\in B$ then $f(b)=(b,1)$. The sets $A$ and $B$ are disjoint, so $f$ is well-defined. The inverse of $f$ is $$ f^{-1}: A\sqcup B\to A\cup B \\ (a,0)\mapsto a\in A \\ (b,1) \mapsto b\in B\\ $$ $f$ is clearly a bijection. To see that it is a homeomorphism, consider the inverse image of an open subset $(X\times \{ 0\})\cup (Y\times \{1\})$; we get $$ f^{-1}((X\times \{ 0\})\cup (Y\times \{1\}))=X\cup Y $$ For $x\in X$ and $y\in Y$ let $B(\epsilon_x, x), B(\epsilon_y, y)$ be open balls. We can choose $\epsilon_x\leq \epsilon_y<\delta$, so that the intersection of the balls is empty and the balls are contained in $X$ and $Y$ respectively, so $X\cup Y$ is an open set as well. Now consider the inverse image of an open set $X\cup Y\subset A\cup B$ under $f^{-1}$; we have $$ (f^{-1})^{-1}(X\cup Y)=(X\times\{0\})\cup(Y\times \{1\}), $$ which is an open set. I am not sure if my proof is correct at all, and why does it need the requirement about the minimal distance between $A$ and $B$? A: Consider $A = \mathbb{Z}$ and $B = \mathbb{R} \setminus \mathbb{Z}$, inside $\mathbb{R}$ with the usual topology. These are disjoint, but $\mathbb{R}$ is not homeomorphic to $\mathbb{Z} \coprod (\mathbb{R}\setminus \mathbb{Z})$, since e.g. every point of $\mathbb{Z}$ is open in the disjoint union, while no point of $\mathbb{R}$ is open. You always have a map $A \coprod B \rightarrow A \cup B$ which is continuous; the $\delta > 0$ assumption gives the openness of the map. A: Your proof works. One remark: at the point where you talk about the balls $B(\epsilon_x,x)$, you should always make clear whether you are referring to the ball relative to $A$ or to $A\cup B$. One could say: Since $X$ is open in $A$, there is an $\epsilon_x>0$ such that $B^A(\epsilon_x,x)=\{a\in A\mid d(x,a)<\epsilon_x\}$ is a subset of $X$. Then if $\epsilon_x<\delta$, it follows that $B(\epsilon_x,x)$ is disjoint from $B$, so $B^{A\cup B}(\epsilon_x,x)=B^A(\epsilon_x,x)$. Hence $B^{A\cup B}(\epsilon_x,x)$ is a subset of $X$, so $X$ is open in $A\cup B$. Others have already posted some counterexamples where $A$ and $B$ are not a positive distance apart. Note that this isn't necessary. It also works if $\overline A\cap B$ and $A\cap\overline B$ are empty ($A,B$ are separated sets). Actually it is just a matter of connectedness. The map $f:A\cup B\to A\sqcup B$ being continuous is just a restatement of $A$ and $B$ each being open in $A\cup B$, which says that $A\cup B$ is disconnected into $A$ and $B.$ A: Your continuity justifications are a bit shaky. (Also, you used $X$ to mean two separate things.) To see that $f^{-1}$ is continuous, take an arbitrary open subset $U$ of $A\cup B$. Then $U=Y\cup Z,$ where $Y=U\cap A$ and $Z=U\cap B$. By definition of the subspace topology, we have that $Y$ is open in $A$ and $Z$ is open in $B$. By definition of the disjoint union topology, we have that $Y\times\{0\}$ and $Z\times\{1\}$ are open in $A\sqcup B,$ and so $$(f^{-1})^{-1}[U]=(Y\times\{0\})\cup(Z\times\{1\})$$ is open in $A\sqcup B.$ Note that our $\delta$ didn't come into play here. Now, to show that $f$ is continuous, we are going to need to use that $\delta$. 
Take an arbitrary open subset $U$ of $A\sqcup B,$ so that $U=(Y\times\{0\})\cup(Z\times\{1\})$ for some $Y$ relatively open in $A$ and some $Z$ relatively open in $B,$ by definition of disjoint union topology, and so $$f^{-1}(U)=Y\cup Z.$$ Since $Y$ is relatively open in $A,$ then $Y=V\cap A$ for some open subset $V\subseteq X.$ Likewise, $Z=W\cap B$ for some open subset $W\subseteq X.$ Without loss of generality, we may suppose that $W\cap A=V\cap B=\emptyset,$ for if not, then we can put $$V'=V\cap\{x\in X:d(x,a)<\delta\text{ for some }a\in A\}$$ and $$W'=W\cap\{x\in X:d(x,b)<\delta\text{ for some }b\in B\}.$$ Then $V',W'$ can be shown to be open, $Y=V'\cap A$ and $Z=W'\cap B$, and because of the condition with $\delta$ we can see that $W'\cap A=V'\cap B=\emptyset$. Now, it then follows that $V\cup W$ is open in $X$, so $(V\cup W)\cap(A\cup B)$ is open in $A\cup B,$ but by our assumption, we have that $$(V\cup W)\cap(A\cup B)=Y\cup Z,$$ as desired. This is due to distributivity of unions and intersections over each other, as $$\begin{align}Y\cup Z &= (V\cap A)\cup(W\cap B)\\ &= \bigl((V\cap A)\cup\emptyset\bigr)\cup\bigl(\emptyset\cup(W\cap B)\bigr)\\ &= \bigl((V\cap A)\cup(V\cap B)\bigr)\cup\bigl((W\cap A)\cup(W\cap B)\bigr)\\ &= \bigl(V\cap(A\cup B)\bigr)\cup\bigl(W\cap (A\cup B)\bigr)\\ &= (V\cup W)\cap(A\cup B).\end{align}$$
Q: How do I load nested key value pairs from a properties file into a Java object using Spring? I understand how to use Spring with the PropertyPlaceholderConfigurer to load a .properties file when we know what properties to expect, and use @Value to store those values into variables or some object. However, how do I have Spring load up a properties file with nested key,value pairs when the keys can vary? For example, let's say I had the following car.properties file: Chevy=Corvette:String,1234567890:long,sportsCar:String Honda=Odyssey:String,2345678910:long,minivan:String Ford=F350:String,4567891011:long,truck:String where each line of the properties file has a key which is the make, followed by three nested key,value pairs, i.e., one for the model, one for the VIN, and one for the vehicle type, i.e., <make>=<model>:<dataType>,<vin>:<dataType>,<vehicleType>:<dataType> I'm using this structure since future vehicles will be added later, and I don't want to change my underlying Java code. And let's say I want to use these vehicle properties to generate some random data about vehicles for testing. How would I use Spring to load each line of the properties file as a collection of vehicle values to be stored in an ArrayList? I'm figuring I'd have a 2D ArrayList where each of these vehicles would be an ArrayList inside the "all vehicles" ArrayList. Then I would randomly select one of the vehicle ArrayLists to generate dummy vehicle data. Anyway, I think I'm on the right track, but just can't seem to figure out how I would load my nested key,value pairs using Spring. Any suggestions? UPDATED context.xml that works for me: Btw, here is the context.xml I'm using: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.0.xsd"> <!-- creates a java.util.Properties instance with values loaded from the supplied location --> <util:properties id="carProperties" location="classpath:/car.properties"/> <bean class="com.data.rcgen.generator.CarLoader"> <property name="sourceProperties" ref="carProperties" /> </bean> </beans> A: There is no way Spring will do this one for you. You will need to implement the parsing yourself. However, Spring can provide some convenience utility classes for you: * You can load the properties file via the util:properties config element. * You can use BeanWrapperImpl to help you set properties on your custom beans. Example (might contain typos): <util:properties id="carProperties" location="classpath:car.properties"/> <bean class="my.package.CarLoader"> <property name="sourceProperties" ref="carProperties" /> </bean> public class Car { private String name; private String category; // ... Getters and setters } public class CarLoader { private Properties sourceProperties; public List<Car> getCars() { List<Car> cars = new ArrayList<Car>(); for (Object key : sourceProperties.keySet()) { // Do the parsing - naive approach String[] values = sourceProperties.getProperty((String) key).split(","); // Create bean wrapper and set the parsed properties - it 
will handle data conversions with default property editors, or can use a custom ConversionService via BeanWrapper#setConversionService BeanWrapper wrappedCar = PropertyAccessorFactory.forBeanPropertyAccess(new Car()); wrappedCar.setPropertyValue("name", values[0].split(":")[0]); // Getting rid of the `:type` wrappedCar.setPropertyValue("category", values[2].split(":")[0]); // Getting rid of the `:type` // Phase 3 - prosper cars.add((Car) wrappedCar.getWrappedInstance()); } return cars; } public void setSourceProperties(Properties properties) { this.sourceProperties = properties; } } UPDATE: a basic example of how to bootstrap the application context from a main method: public class Main { public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext("context.xml"); CarLoader carLoader = context.getBean(CarLoader.class); for (Car car : carLoader.getCars()) { System.out.println("CAR - " + car.getName()); } } }
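A side note on the conversion piece (this is my own sketch, not part of the original answer; the vin property and the index used for it are assumptions based on the question's file layout): the same BeanWrapper call that sets Strings will also coerce the VIN into a numeric property, because the default property editors convert "1234567890" to a long automatically.

public class Car {
    private String name;
    private long vin;       // hypothetical extra property for the VIN
    private String category;
    // ... Getters and setters
}

// Inside CarLoader.getCars(), after splitting the line:
BeanWrapper wrappedCar = PropertyAccessorFactory.forBeanPropertyAccess(new Car());
wrappedCar.setPropertyValue("name", values[0].split(":")[0]);
// The value is passed as a String; the default property editors
// convert it to the long declared on the vin property.
wrappedCar.setPropertyValue("vin", values[1].split(":")[0]);
wrappedCar.setPropertyValue("category", values[2].split(":")[0]);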
Q: Get CPU for process in PowerShell I want to get the value that is shown in Task Manager for any process in the CPU column, in PowerShell. I tried using Get-Process ProcessName | Select-Object -Property CPU but it only returns the processor time spent, not the current CPU percentage. A: Try using the Get-Counter command, which pulls the data from the system's performance monitor. For your example, it would look like this: # ~> Get-Counter "\Process(ProcessName*)\% Processor Time" | select -expand countersamples An example, using chrome: # ~> Get-Counter "\Process(chrome*)\% Processor Time" | select -expand countersamples Path InstanceName CookedValue ---- ------------ ----------- \\machinename\process(chrome#7)\% processor time chrome 0 \\machinename\process(chrome#6)\% processor time chrome 0 \\machinename\process(chrome#5)\% processor time chrome 0 \\machinename\process(chrome#4)\% processor time chrome 0 \\machinename\process(chrome#3)\% processor time chrome 0 \\machinename\process(chrome#2)\% processor time chrome 0 \\machinename\process(chrome#1)\% processor time chrome 0 \\machinename\process(chrome)\% processor time chrome 3.10141153081511 A: Here are some copy-pastable examples of filtering processes with Get-Counter and Get-WmiObject. For example, to get the top 10 processes by CPU usage: powershell "(Get-Counter '\Process(*)\% Processor Time').Countersamples | Sort cookedvalue -Desc| Select -First 10 instancename, cookedvalue" Or, with cleaner formatting: powershell "(Get-Counter '\Process(*)\% Processor Time').Countersamples | Sort cookedvalue -Desc | Select -First 10 instancename, @{Name='CPU %';Expr={[Math]::Round($_.CookedValue)}}" And a WMI-based alternative: powershell "gwmi Win32_PerfFormattedData_PerfProc_Process | Sort PercentProcessorTime -desc | Select -first 7 Name, PercentProcessorTime, IOReadBytesPersec, IOWriteBytesPersec, WorkingSet | ft -autosize" A: The simpler way I would go about achieving this is get-process processName | where-object {$_.cpu -gt $desiredValue}
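One caveat worth adding (my own note, not from the answers above): the raw \Process(*)\% Processor Time counter is scaled to a single core, so on a multi-core machine a busy process can show well over 100. Task Manager's CPU column divides by the number of logical processors, so to match it you can do the same:

$cores = [Environment]::ProcessorCount
(Get-Counter '\Process(chrome*)\% Processor Time').CounterSamples |
    Select-Object InstanceName,
        @{Name='CPU %'; Expression={ [math]::Round($_.CookedValue / $cores, 1) }}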
Q: Read JTextPane line by line Is there a way to read the contents of a JTextPane line by line? Much like the BufferedReader? A: Element root = textPane.getDocument().getDefaultRootElement(); Once you get the root Element you can check to see how many child elements (i.e. lines) exist. Then you can get each child Element and use the start/end offset methods to get the text for that particular line. This would be more efficient than getting all the text in one big string and then splitting it again. A: The way that I've done this in the past is to use the getText method over my pane, then parse the string that is returned looking for the newline character '\n'. A: Can you explain what you're trying to do? Off the top of my head I can't say if it is actually possible to read it line by line. Of course you could just split the text by the newline character and then you would get an array of strings, each line as its own element. Is this a problem in your case?
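To make the Element-based approach from the first answer concrete, here is a minimal sketch (my own code, assuming a populated JTextPane called textPane; it would live inside a method of your Swing class):

import javax.swing.text.BadLocationException;
import javax.swing.text.Document;
import javax.swing.text.Element;

Document doc = textPane.getDocument();
Element root = doc.getDefaultRootElement();
for (int i = 0; i < root.getElementCount(); i++) {
    Element line = root.getElement(i);
    int start = line.getStartOffset();
    // The last line's end offset can point one past the end of the document.
    int end = Math.min(line.getEndOffset(), doc.getLength());
    try {
        String text = doc.getText(start, end - start);
        // ... process one line, e.g. System.out.println(text);
    } catch (BadLocationException e) {
        e.printStackTrace();
    }
}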
Q: Rotate image with ffmpeg I just figured out how to rotate an image with FFmpeg, but there is an issue when the filename contains a "%" sign: then the command does not work. ffmpeg -y -i '/mypath/Prat%eek.jpg' -vf transpose=2 '/mypath/Prat%eek.jpg' A: As per the documentation of ffmpeg, each of the special characters %*?[]{} should be escaped by %. All glob special characters %*?[]{} must be prefixed with "%". To escape a literal "%" you shall use "%%". So the above command should be ffmpeg -y -i '/mypath/Prat%%eek.jpg' -vf transpose=2 '/mypath/Prat%%eek.jpg' EDIT After using the above command on linux I found it was not working, and in the documentation it is also mentioned that For example the pattern foo-%*.jpeg will match all the filenames prefixed by "foo-" and terminating with ".jpeg", and foo-%?%?%?.jpeg will match all the filenames prefixed with "foo-", followed by a sequence of three characters, and terminating with ".jpeg". So I tried with this command, ffmpeg -y -i '/mypath/Prat%?eek.jpg' -vf transpose=2 '/mypath/Prat%%eek.jpg' The above command worked. Edit I did not find in any documentation that the source file name should be escaped one way and the destination path another way, but as the above command works, I think that: * The source path should be escaped by using the ? sign (after the % escape character) when the ffmpeg command works on a single file. * The destination path should have its literal "%" escaped with "%%".
Q: Divide rows with date in SQL Server 2014 I have a problem with SQL. I have the following table: declare @t table (START_DATE datetime, END_DATE datetime, GROSS_SALES_PRICE decimal(10,2) ); insert into @t values ('2014-08-06 00:00:00.000', '2014-10-06 23:59:59.000', 29.99), ('2014-09-06 00:00:00.000', '2014-09-09 23:59:59.000', 32.99), ('2014-09-10 00:00:00.000', '2014-09-30 23:59:59.000', 32.99), ('2014-10-07 00:00:00.000', '2049-12-31 23:59:59.000', 34.99) I would like to separate the dates which overlap. For example, the first row has START_DATE 2014-08-06 and END_DATE 2014-10-06. We can see that the dates from the second and the third row are inside this period of time from the first row. So I would like to separate them as follows: declare @t2 table (START_DATE datetime, END_DATE datetime, GROSS_SALES_PRICE decimal(10,2) ); insert into @t2 values ('2014-08-06 00:00:00.000', '2014-09-05 23:59:59.000', 29.99), ('2014-09-06 00:00:00.000', '2014-09-09 23:59:59.000', 32.99), ('2014-09-10 00:00:00.000', '2014-09-30 23:59:59.000', 32.99), ('2014-10-01 00:00:00.000', '2014-10-06 23:59:59.000', 29.99), ('2014-10-07 00:00:00.000', '2049-12-31 23:59:59.000', 34.99) So the second and the third rows remain unchanged. The first row gets a new END_DATE, and we also have a new row. The GROSS_SALES_PRICE should remain as it is in the internal period. Thanks for help. I am using SQL Server 2014 A: A calendar/dates table can simplify this, but we can also use a query to generate a temporary dates table using a common table expression. From there, we can solve this as a gaps and islands style problem. Using the dates table and outer apply() to get the latest values for start_date and gross_sales_price, we can identify the groups we want to re-aggregate by using two row_number()s: the first just ordered by date, less the other, which is partitioned by the value we have as the latest start_date and ordered by date. Then you can dump the results of the common table expression src to a temporary table and do your inserts/deletes using that, or you can use merge using src. 
/* -- dates --*/ declare @fromdate datetime, @thrudate datetime; select @fromdate = min(start_date), @thrudate = max(end_date) from #t; ;with n as (select n from (values(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) t(n)) , dates as ( select top (datediff(day, @fromdate, @thrudate)+1) [Date]=convert(datetime,dateadd(day,row_number() over(order by (select 1))-1,@fromdate)) , [End_Date]=convert(datetime,dateadd(millisecond,-3,dateadd(day,row_number() over(order by (select 1)),@fromdate))) from n as deka cross join n as hecto cross join n as kilo cross join n as tenK cross join n as hundredK order by [Date] ) /* -- islands -- */ , cte as ( select start_date = d.date , end_date = d.end_date , x.gross_sales_price , grp = row_number() over (order by d.date) - row_number() over (partition by x.start_date order by d.date) from dates d outer apply ( select top 1 l.start_date, l.gross_sales_price from #t l where d.date >= l.start_date and d.date <= l.end_date order by l.start_date desc ) x ) /* -- aggregated islands -- */ , src as ( select start_date = min(start_date) , end_date = max(end_date) , gross_sales_price from cte group by gross_sales_price, grp ) /* -- merge -- */ merge #t with (holdlock) as target using src as source on target.start_date = source.start_date and target.end_date = source.end_date and target.gross_sales_price = source.gross_sales_price when not matched by target then insert (start_date, end_date, gross_sales_price) values (start_date, end_date, gross_sales_price) when not matched by source then delete output $action, inserted.*, deleted.*; /* -- results -- */ select start_date , end_date , gross_sales_price from #t order by start_date rextester demo: http://rextester.com/MFXCQQ90933 merge output (you do not need to output this, just showing for the demo): +---------+---------------------+---------------------+-------------------+---------------------+---------------------+-------------------+ | $action | START_DATE | END_DATE | GROSS_SALES_PRICE | START_DATE | END_DATE | GROSS_SALES_PRICE | +---------+---------------------+---------------------+-------------------+---------------------+---------------------+-------------------+ | INSERT | 2014-10-01 00:00:00 | 2014-10-06 23:59:59 | 29.99 | NULL | NULL | NULL | | INSERT | 2014-08-06 00:00:00 | 2014-09-05 23:59:59 | 29.99 | NULL | NULL | NULL | | DELETE | NULL | NULL | NULL | 2014-08-06 00:00:00 | 2014-10-06 23:59:59 | 29.99 | +---------+---------------------+---------------------+-------------------+---------------------+---------------------+-------------------+ results: +-------------------------+-------------------------+-------------------+ | start_date | end_date | gross_sales_price | +-------------------------+-------------------------+-------------------+ | 2014-08-06 00:00:00.000 | 2014-09-05 23:59:59.997 | 29.99 | | 2014-09-06 00:00:00.000 | 2014-09-09 23:59:59.997 | 32.99 | | 2014-09-10 00:00:00.000 | 2014-09-30 23:59:59.997 | 32.99 | | 2014-10-01 00:00:00.000 | 2014-10-06 23:59:59.997 | 29.99 | | 2014-10-07 00:00:00.000 | 2049-12-31 23:59:59.997 | 34.99 | +-------------------------+-------------------------+-------------------+ calendar and numbers tables reference: * *Generate a set or sequence without loops 2- Aaron Bertrand *Creating a Date Table/Dimension in SQL Server 2008 - David Stein *Calendar Tables - Why You Need One - David Stein *Creating a date dimension or calendar table in SQL Server - Aaron Bertrand merge reference: * *Use Caution with SQL Server''s MERGE Statement - Aaron Bertrand *UPSERT Race Condition 
With Merge - Dan Guzman *An Interesting MERGE Bug - Paul White *Can I optimize this merge statement - Aaron Bertrand *If you are using indexed views and MERGE, please read this! - Aaron Bertrand *The Case of the Blocking Merge Statement (LCK_M_RS_U locks) - Kendra Little *Writing t-sql merge statements the right way - David Stein A: In addition to using datetime2 type instead of datetime, I'd recommend you to use [Closed; Open) intervals instead of [Closed; Closed]. In other words, use 2014-08-06 00:00:00.000, 2014-09-06 00:00:00.000 instead of 2014-08-06 00:00:00.000, 2014-09-05 23:59:59.000. Specifically, because 59.999 will be rounded to 00.000 for the datetime type, but will not for datetime2(3). You don't want to depend on such internal details of the data types. Also, [Closed; Open) intervals are much easier to deal with in the queries as you'll see below. The main idea is to put all start and end dates (boundaries) together in one list with a flag that indicates whether it is a beginning or end of the interval. When a running total of the flag turns into zero, it means that all overlapping intervals have ended. Sample data I extended your sample data with several cases of overlapping intervals. declare @t table (START_DATE datetime2(0), END_DATE datetime2(0), GROSS_SALES_PRICE decimal(10,2) ); insert into @t values -- |------| 11 ('2001-01-01 00:00:00', '2001-01-10 00:00:00', 11), -- |------| 10 -- |------| 20 ('2010-01-01 00:00:00', '2010-01-10 00:00:00', 10), ('2010-01-05 00:00:00', '2010-01-20 00:00:00', 20), -- |----------| 30 -- |------| 40 ('2010-02-01 00:00:00', '2010-02-20 00:00:00', 30), ('2010-02-05 00:00:00', '2010-02-20 00:00:00', 40), -- |----------| 50 -- |----------| 60 ('2010-03-01 00:00:00', '2010-03-20 00:00:00', 50), ('2010-03-01 00:00:00', '2010-03-20 00:00:00', 60), -- |----------| 70 -- |------| 80 ('2010-04-01 00:00:00', '2010-04-20 00:00:00', 70), ('2010-04-05 00:00:00', '2010-04-15 00:00:00', 80), -- |-----------------------------| 29.99 -- |---------| 32.99 -- |---------| 32.99 -- |----------| 34.99 ('2014-08-06 00:00:00', '2014-10-07 00:00:00', 29.99), ('2014-09-06 00:00:00', '2014-09-10 00:00:00', 32.99), ('2014-09-10 00:00:00', '2014-10-01 00:00:00', 32.99), ('2014-10-07 00:00:00', '2050-01-01 00:00:00', 34.99); Query WITH CTE_Boundaries AS ( SELECT START_DATE AS dt ,+1 AS Flag ,GROSS_SALES_PRICE AS Price FROM @T UNION ALL SELECT END_DATE AS dt ,-1 AS Flag ,GROSS_SALES_PRICE AS Price FROM @T ) ,CTE_Intervals AS ( SELECT dt ,Flag ,Price ,SUM(Flag) OVER (ORDER BY dt, Flag ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS SumFlag ,LEAD(dt) OVER (ORDER BY dt, Flag) AS NextDate ,LEAD(Price) OVER (ORDER BY dt, Flag) AS NextPrice FROM CTE_Boundaries ) SELECT dt AS StartDate ,NextDate AS EndDate ,CASE WHEN Flag = 1 THEN Price ELSE NextPrice END AS Price FROM CTE_Intervals WHERE SumFlag > 0 AND dt <> NextDate ORDER BY StartDate ; Result +---------------------+---------------------+-------+ | StartDate | EndDate | Price | +---------------------+---------------------+-------+ | 2001-01-01 00:00:00 | 2001-01-10 00:00:00 | 11.00 | | 2010-01-01 00:00:00 | 2010-01-05 00:00:00 | 10.00 | | 2010-01-05 00:00:00 | 2010-01-10 00:00:00 | 20.00 | | 2010-01-10 00:00:00 | 2010-01-20 00:00:00 | 20.00 | | 2010-02-01 00:00:00 | 2010-02-05 00:00:00 | 30.00 | | 2010-02-05 00:00:00 | 2010-02-20 00:00:00 | 40.00 | | 2010-03-01 00:00:00 | 2010-03-20 00:00:00 | 60.00 | | 2010-04-01 00:00:00 | 2010-04-05 00:00:00 | 70.00 | | 2010-04-05 00:00:00 | 2010-04-15 00:00:00 | 80.00 | | 
2010-04-15 00:00:00 | 2010-04-20 00:00:00 | 70.00 | this is your sample data: | 2014-08-06 00:00:00 | 2014-09-06 00:00:00 | 29.99 | | 2014-09-06 00:00:00 | 2014-09-10 00:00:00 | 32.99 | | 2014-09-10 00:00:00 | 2014-10-01 00:00:00 | 32.99 | | 2014-10-01 00:00:00 | 2014-10-07 00:00:00 | 29.99 | | 2014-10-07 00:00:00 | 2050-01-01 00:00:00 | 34.99 | +---------------------+---------------------+-------+ Intermediary result of CTE_Intervals Examine these to understand how the query works +---------------------+------+-------+---------+---------------------+-----------+ | dt | Flag | Price | SumFlag | NextDate | NextPrice | +---------------------+------+-------+---------+---------------------+-----------+ | 2001-01-01 00:00:00 | 1 | 11.00 | 1 | 2001-01-10 00:00:00 | 11.00 | | 2001-01-10 00:00:00 | -1 | 11.00 | 0 | 2010-01-01 00:00:00 | 10.00 | | 2010-01-01 00:00:00 | 1 | 10.00 | 1 | 2010-01-05 00:00:00 | 20.00 | | 2010-01-05 00:00:00 | 1 | 20.00 | 2 | 2010-01-10 00:00:00 | 10.00 | | 2010-01-10 00:00:00 | -1 | 10.00 | 1 | 2010-01-20 00:00:00 | 20.00 | | 2010-01-20 00:00:00 | -1 | 20.00 | 0 | 2010-02-01 00:00:00 | 30.00 | | 2010-02-01 00:00:00 | 1 | 30.00 | 1 | 2010-02-05 00:00:00 | 40.00 | | 2010-02-05 00:00:00 | 1 | 40.00 | 2 | 2010-02-20 00:00:00 | 30.00 | | 2010-02-20 00:00:00 | -1 | 30.00 | 1 | 2010-02-20 00:00:00 | 40.00 | | 2010-02-20 00:00:00 | -1 | 40.00 | 0 | 2010-03-01 00:00:00 | 50.00 | | 2010-03-01 00:00:00 | 1 | 50.00 | 1 | 2010-03-01 00:00:00 | 60.00 | | 2010-03-01 00:00:00 | 1 | 60.00 | 2 | 2010-03-20 00:00:00 | 50.00 | | 2010-03-20 00:00:00 | -1 | 50.00 | 1 | 2010-03-20 00:00:00 | 60.00 | | 2010-03-20 00:00:00 | -1 | 60.00 | 0 | 2010-04-01 00:00:00 | 70.00 | | 2010-04-01 00:00:00 | 1 | 70.00 | 1 | 2010-04-05 00:00:00 | 80.00 | | 2010-04-05 00:00:00 | 1 | 80.00 | 2 | 2010-04-15 00:00:00 | 80.00 | | 2010-04-15 00:00:00 | -1 | 80.00 | 1 | 2010-04-20 00:00:00 | 70.00 | | 2010-04-20 00:00:00 | -1 | 70.00 | 0 | 2014-08-06 00:00:00 | 29.99 | | 2014-08-06 00:00:00 | 1 | 29.99 | 1 | 2014-09-06 00:00:00 | 32.99 | | 2014-09-06 00:00:00 | 1 | 32.99 | 2 | 2014-09-10 00:00:00 | 32.99 | | 2014-09-10 00:00:00 | -1 | 32.99 | 1 | 2014-09-10 00:00:00 | 32.99 | | 2014-09-10 00:00:00 | 1 | 32.99 | 2 | 2014-10-01 00:00:00 | 32.99 | | 2014-10-01 00:00:00 | -1 | 32.99 | 1 | 2014-10-07 00:00:00 | 29.99 | | 2014-10-07 00:00:00 | -1 | 29.99 | 0 | 2014-10-07 00:00:00 | 34.99 | | 2014-10-07 00:00:00 | 1 | 34.99 | 1 | 2050-01-01 00:00:00 | 34.99 | | 2050-01-01 00:00:00 | -1 | 34.99 | 0 | NULL | NULL | +---------------------+------+-------+---------+---------------------+-----------+ A: How about using Lead to find the value from the next row: SELECT START_DATE, CASE WHEN LEAD(Start_Date) OVER (ORDER BY Start_Date) < END_DATE THEN COALESCE(DATEADD(s, -1, LEAD(Start_Date) OVER (ORDER BY Start_Date)), END_Date) ELSE END_DATE END AS End_Date, GROSS_SALES_PRICE FROM @t Or using a common table expression: ;WITH CTE AS ( SELECT Start_date, End_Date, LEAD(Start_Date) OVER (ORDER BY Start_Date) AS NextStartDate, GROSS_SALES_PRICE FROM @t ) SELECT START_DATE, CASE WHEN NextStartDate < END_DATE THEN Coalesce(DATEADD(s, -1, NextStartDate), End_Date) ELSE End_date END As End_Date, GROSS_SALES_PRICE FROM CTE Updated to add missing row: ;WITH CTE AS ( SELECT Start_date, End_Date, LAG(END_Date) OVER (ORDER BY Start_Date) AS PreviousEndDate, LEAD(Start_Date) OVER (ORDER BY Start_Date) AS NextStartDate, GROSS_SALES_PRICE FROM @t ) SELECT START_DATE, CASE WHEN NextStartDate < END_DATE THEN Coalesce(DATEADD(s, -1, NextStartDate), 
End_Date) ELSE End_date END As End_Date, GROSS_SALES_PRICE FROM CTE UNION ALL SELECT DATEADD(s, 1, PreviousEndDate), DATEADD(s, -1, Start_Date), GROSS_SALES_PRICE FROM CTE WHERE DATEDIFF(s, PreviousEndDate,Start_Date) > 1 ORDER BY 1 A: Note: following solution comes with few assumptions [1] It's using LEAD function => SQL2012+ [2] All DATETIME columns are mandatory => NOT NULL [3] All DATETIME values (across both columns) are unique. select y.* from ( select t.ID, x.DT AS NEW_START_DATE, DATEADD(MILLISECOND, -3, LEAD(x.DT) OVER(ORDER BY x.DT ASC)) AS NEW_END_DATE from @t as t outer apply ( select t.START_DATE, 1 union all select t.END_DATE, 2 ) as x(DT, [TYPE]) ) as y where y.NEW_END_DATE IS NOT NULL order by y.NEW_START_DATE A: This can be solved with simple joins and unions. However better with an ID. The common table expression is only to add an ID. declare @t table(START_DATE datetime,END_DATE datetime, GROSS_SALES_PRICE decimal(10,2)); insert into @t values ( '2014-08-06 00:00:00.000', '2014-10-06 23:59:59.000', 29.99), ( '2014-09-06 00:00:00.000', '2014-09-09 23:59:59.000', 32.99), ( '2014-09-10 00:00:00.000', '2014-09-30 23:59:59.000', 32.99), ( '2014-10-07 00:00:00.000', '2049-12-31 23:59:59.000', 34.99) ;with t_cte as (select row_number() over( order by start_date,end_date,GROSS_SALES_PRICE) ID,* from @t ) select t1.start_date,min(t2.start_date),t1.GROSS_SALES_PRICE from t_cte t1 join t_cte t2 on t1.END_DATE > t2.START_DATE and t1.END_DATE> t2.START_DATE and t1.id< t2.id group by t1.START_DATE,t1.END_DATE,t1.GROSS_SALES_PRICE union all select min(t2.start_date),t1.end_date,t1.GROSS_SALES_PRICE from t_cte t1 join t_cte t2 on t1.END_DATE > t2.START_DATE and t1.END_DATE> t2.START_DATE and t1.id< t2.id group by t1.START_DATE,t1.END_DATE,t1.GROSS_SALES_PRICE union all select t1.start_date,t1.END_DATE,t1.GROSS_SALES_PRICE from t_cte t1 left join t_cte t2 on t1.END_DATE > t2.START_DATE and t1.END_DATE> t2.START_DATE and t1.id< t2.id where t2.id is null order by 1,2,3
Q: Solve the PDE $(x-y)p+(x+y)q=2xz$. Using the multipliers $1,1,-\frac{1}{z}$, we get $x+y-\log z=c_1$. How do I get the second equation from Lagrange's auxiliary equation? $$\frac{dx}{x-y} = \frac{dy}{x+y} = \frac{dz}{2xz}$$ I think that since the first two expressions only involve $x$ and $y$, they should be integrable. But it's still not obvious to me. Can someone help? A: $$(x-y)z_x+(x+y)z_y=2xz$$ $$\frac{dx}{x-y} = \frac{dy}{x+y} = \frac{dz}{2xz}\quad\text{OK}$$ I agree with your first characteristic equation: $x+y-\log z=C_1$, or in an equivalent form with $c_1=e^{-C_1}$: $$z\,e^{-(x+y)}=c_1$$ A second characteristic equation comes from solving $\frac{dx}{x-y} = \frac{dy}{x+y}$, i.e. $\frac{dy}{dx}=\frac{x+y}{x-y}$. This is a homogeneous ODE, easy to solve. Hint: let $y(x)=x\,u(x)$. One gets (the substitution is worked out below): $$\frac12\ln(x^2+y^2)-\tan^{-1}\left(\frac{y}{x}\right)=c_2$$ The general solution of the PDE, from the implicit form $c_1=F(c_2)$, is: $$z(x,y)=e^{x+y}F\left(\frac12\ln(x^2+y^2)-\tan^{-1}\left(\frac{y}{x}\right) \right)$$ $F$ is an arbitrary function (to be determined according to some boundary condition).
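To fill in the hint above (a short derivation that is not in the original answer): with $y=x\,u(x)$ we have $\frac{dy}{dx}=u+x\frac{du}{dx}$, so $$u+x\frac{du}{dx}=\frac{1+u}{1-u}\quad\Longrightarrow\quad x\frac{du}{dx}=\frac{1+u^2}{1-u}.$$ Separating variables and integrating, $$\int\frac{1-u}{1+u^2}\,du=\int\frac{dx}{x}\quad\Longrightarrow\quad \tan^{-1}(u)-\frac12\ln(1+u^2)=\ln|x|+C.$$ Since $\frac12\ln(1+u^2)+\ln|x|=\frac12\ln(x^2+y^2)$ for $u=y/x$, absorbing the sign into the constant gives $$\frac12\ln(x^2+y^2)-\tan^{-1}\left(\frac{y}{x}\right)=c_2.$$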
Q: Wordpress Custom Query Posts Showposts number Below is a script that I'm still unsure how to get to work. In Wordpress I have a repeater field where I can input the number of days that are in a month, so it creates calendar squares for me to highlight in a booking process. What I want to do is to have the field 'how_many_days' run a loop that will then repeat the number of divs calendarPost. So ideally I can input separate numbers of loops. A live version of the output is here: http://universitycompare.com/school-bookings/ <?php if(get_field('calendar_repeater_field')): ?> <?php while(has_sub_field('calendar_repeater_field')): ?> <?php $numberoffields = get_sub_field('how_many_days'); ?> <?php $wp_query->query('showposts='.$numberoffields ); if (have_posts()) : while (have_posts()) : the_post(); ?> <div class="calendarPost"> <span class="title"><?php the_sub_field('calendar_month_name'); ?><span class="circleOpen"></span></span> </div> <?php endwhile; endif; wp_reset_query(); ?> <?php endwhile; ?> <?php endif; ?> FYI - I didn't know whether this would be a PHP related problem or WP only, so please advise if this post should be elsewhere and I will remove and repost in the correct stackoverflow forum. A: Your question didn't completely explain if you were in fact trying to output posts, so below are a couple of suggestions. I'll start with what I think you're trying to do: If you're just wanting to output the div.calendarPost over and over (based on the number of days) then you don't need a WordPress loop for that. A standard PHP for loop will do <?php if ( get_field('calendar_repeater_field' ) ) : ?> <?php while ( has_sub_field('calendar_repeater_field' ) ) : ?> <?php $numberoffields = get_sub_field('how_many_days'); ?> <?php for ( $i=0; $i < $numberoffields; $i++ ) { ?> <div class="calendarPost"> <span class="title"><?php the_sub_field('calendar_month_name'); ?><span class="circleOpen"></span></span> </div> <?php } ?> <?php endwhile; ?> <?php endif; ?> If however you're wanting to output posts (based on the number of days in the ACF field) then you would use the below code. <?php if ( get_field('calendar_repeater_field' ) ) : ?> <?php while ( has_sub_field('calendar_repeater_field' ) ) : ?> <?php $numberoffields = get_sub_field('how_many_days'); ?> <?php $calendar_posts = new WP_Query('posts_per_page=' . $numberoffields); ?> <?php if ( $calendar_posts->have_posts() ) : while ( $calendar_posts->have_posts() ) : $calendar_posts->the_post(); ?> <div class="calendarPost"> <span class="title"><?php the_sub_field('calendar_month_name'); ?><span class="circleOpen"></span></span> </div> <?php endwhile; wp_reset_postdata(); endif; ?> <?php endwhile; ?> <?php endif; ?> Refer to "The Usage" section of the WP Codex for more info: http://codex.wordpress.org/Class_Reference/WP_Query. Hope that helps.
Q: Asp.net Core MVC Dealing with validation scripts as a separate file I was in the process of adding JavaScript files to site.js and the page was recognizing the scripts; however, I struck a wrinkle when I tried to move the validation scripts to site.js. Site.js appears above the validation scripts, which are added to the page using @{await Html.RenderPartialAsync("_ValidationScriptsPartial");} Rather than having the validation scripts for the page on the page (or is this how most people do it), how do I add, say, a validation.js file that I might add below the above entry? Further, if I do add another file, how do I minify it (add it to GULP) and add it in the production/staging environment as well? A: It is recommended that you put the validation script rendering into a section as: @section Scripts { @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); } } and then refer to it in the _Layout.cshtml at the bottom of the body as: @RenderSection("scripts", required: false) Then you will be able to validate your forms, and these JS files will appear only in the view in which you referenced them. If I do add another page how do I minify it (add it to GULP) and add it in the production staging environment as well.. You will have to minify your JS files once for all of your views (pages) and they will work. A: Ideally you'd like all scripts to be loaded at the end of the page, as scripts can block the rendering of your page and make it appear slower to the user. The way we achieve this is by using sections in our layout file. In your layout file you could add the following code: @RenderSection("scripts", required: false) Then in your views you can add any specific scripts that they require: @section scripts { @Scripts.Render("~/Scripts/Validation.js") } Scott Gu has a good blog post that will explain sections a bit more in depth As for minification, have you looked at using the built-in bundling and minification tools that come with MVC nowadays? Here's another blog post, this time by Rick Anderson, that explains it all, but essentially you can define different bundles which can be set to automatically bundle and minify the scripts for you. Defining a bundle is as simple as: bundles.Add(new ScriptBundle("~/bundles/jquery").Include( "~/Scripts/jquery-{version}.js")); And outputting them to your page is the same as above except instead of the link to your source file you use the bundle link you defined. So outputting the above would be done like so: @Scripts.Render("~/bundles/jquery")
Q: Understanding Microsoft Hotfix numbers I got the Microsoft Security Bulletin MS16-008: https://technet.microsoft.com/library/security/ms16-008 The title names a security update "3124605", so I expected to find the hotfix KB3124605 on my system (Windows 8.1), but it's not installed. The bulletin also refers to the hotfix KB3121212, which actually is installed on my system. Why are there two different KB numbers for the same thing? https://support.microsoft.com/kb/3124605 https://support.microsoft.com/kb/3121212 A: Hotfixes are special-purpose fixes, to solve a specific problem. The security updates solve multiple problems and may include the same fix, but as part of solving a more general problem. For that generality, the latter are better-tested and distributed to a wider audience. The reason why there are two numbers, of course, is because the fixes are distributed separately, documented in two different places. Further reading: * Windows Hotfixes and Updates - How do they work? * Description of the standard terminology that is used to describe Microsoft software updates * What's difference between Security Patch, HotFix and Service Pack? * Best Practices for Applying Service Packs, Hotfixes and Security Patches
Q: VSTS: Release Management Deploying Artifacts to IIS on Premise I am using VSTS Release Management to deploy artifacts to IIS websites. I have several web applications and web services to be deployed, so I am trying to figure out what sort of tasks best fit my situation. I have created a build definition with a Visual Studio Build task for projects such as this one: which works fine, but I need to add a task for copying the artifacts under the IIS website directory. The other approach is to use IIS web deployment as a task in the Release definition, so I created the build definition as: However, it expects a Publish Profile (the build fails because it can't find it). I don't need to create a publish profile for each project in the application because this would be too much work. Is there a workaround for that, or what is the preferred approach for this? A: You can update your build definition to generate a web deployment package and upload it to artifacts. And then in Release Management, add a task to run "projectname.deploy.cmd" in the deployment package to deploy it to your IIS server. Refer to this link for details: How to: Install a Deployment Package Using the deploy.cmd File Created by Visual Studio. And you can also enable FTP publishing on your IIS server and add a task in your release to publish the artifacts via FTP. You may need this task: FTP Uploader. A: My Continuous Delivery with TFS / VSTS – Server Configuration and Application Deployment with Release Management blog post (with reference to some previous posts) has all the details you need for deploying your artefacts to target nodes using Windows Machine File Copy tasks, then using PowerShell on Target Machines tasks to get them into the correct locations and to do token replacement and anything else that's required. I would recommend using PowerShell DSC so that IIS is properly configured before deployment, but that's not required. Where possible, for web apps I favour keeping things very simple by creating artefacts that contain all the web files that are needed for a particular folder and then just using plain xcopy for the deployment. A: If you need more control you can also use my MSDeploy VSTS extension to deploy an MSDeploy package https://marketplace.visualstudio.com/items?itemName=rschiefer.MSDeployAllTheThings https://dotnetcatch.com/2016/04/20/msdeployallthethings-vststfs-extension-is-public/
Q: Python Struggling to create a C extension wrapping a 3rd party dll We are trying to wrap a 3rd party dll (written in C) to access it through Python. The dll has a .lib, a .c and a .h file with it. We are accessing the dll through the .c file. Outside of the extension (running as a console application), the code works without an issue. Without the 3rd party dll, the Python extension works without an issue. The issue comes in when trying to combine the third party dll with the Python extension. Here is the distutils installation script ########## Setup.py ################################ from distutils.core import setup, Extension chemAppPython_mod = Extension('chemAppPython', sources = ['chemAppPython.c', 'cacint.c'], libraries=['ca_vc_opt_e'], depends = ['cacint.h']) setup(name = "chemAppPython", version = "1.0", description = "The ChemnApp Python module", ext_modules = [chemAppPython_mod], data_files = [('',['ca_vc_e.dll'])] ) #################################################### * ca_vc_opt_e.lib and ca_vc_e.dll are the library and DLL containing the third party methods we want to access. * cacint.h and cacint.c are the files acting as an interface to ca_vc_opt_e.lib and ca_vc_e.dll. * chemAppPython.c is the file containing the code wrapping the calls to cacint.c (and in effect, the third party dll), exposing the C code to Python. The errors we are receiving are: C:\Python33\source\Python-3.3.4\ChemAppPython>setup.py install running install running build running build_ext building 'chemAppPython' extension creating build creating build\temp.win-amd64-3.3 creating build\temp.win-amd64-3.3\Release C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Python33\include -IC:\Python33\include /TcchemAppPython.c /Fobuild\temp.win-amd64-3.3\Release\chemAppPython.obj chemAppPython.c C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Python33\include -IC:\Python33\include /Tccacint.c /Fobuild\temp.win-amd64-3.3\Release\cacint.obj cacint.c cacint.c(357) : warning C4267: 'function' : conversion from 'size_t' to 'long', possible loss of data cacint.c(390) : warning C4267: 'function' : conversion from 'size_t' to 'long', possible loss of data . . . (some more of the same warning message for different functions.) . cacint.c(619) : warning C4996: 'strcpy': This function or variable may be unsafe. Consider using strcpy_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. . . . . (some more of the same warning message at different positions in code.) . creating build\lib.win-amd64-3.3 C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python33\libs /LIBPATH:C:\Python33\PCbuild\amd64 ca_vc_opt_e.lib /EXPORT:PyInit_chemAppPython build\temp.win-amd64-3.3\Release\chemAppPython.obj build\temp.win-amd64-3.3\Rele chemAppPython.obj : warning LNK4197: export 'PyInit_chemAppPython' specified multiple times; using first specification Creating library build\temp.win-amd64-3.3\Release\chemAppPython.lib and object build\temp.win-amd64-3.3\Release\chemAppPython.exp cacint.obj : error LNK2019: unresolved external symbol TQINI referenced in function tqini cacint.obj : error LNK2019: unresolved external symbol TQOPEN referenced in function tqopen . . . (a lot more of them, for different methods. Again, it builds and runs fine in the console app host application.) . 
build\lib.win-amd64-3.3\chemAppPython.pyd : fatal error LNK1120: 74 unresolved externals error: command '"C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\link.exe"' failed with exit status 1120 * I followed the Python extension tutorials for Windows from the python website. * I can successfully build the extension from Visual Studio 10.0 and make the extension run from a source code build of Python. * I am unable to make it work from the installed Python (not the source code build). I copied the created .pyd file to the site-packages folder and received an error when I tried to import the extension from the python console. A: I solved it. Apparently 64-bit Python doesn't mingle well (or at all) with 32-bit DLLs. I downgraded Python to a 32-bit version and everything just worked.
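A quick way to confirm such a bitness mismatch up front (my own suggestion, assuming the Visual Studio tools are on the PATH) is to inspect the DLL's headers:

dumpbin /headers ca_vc_e.dll | findstr machine

The FILE HEADER output reports 14C machine (x86) for a 32-bit DLL and 8664 machine (x64) for a 64-bit one; this must match the bitness of the Python interpreter loading the extension.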
Q: Google Maps API - Onscreen Keyboard "Go!" I'm using the Google Maps API to place a map onto a web page. As the computer with the map does not have a keyboard, I simulate a JavaScript "onscreen" keyboard. My problem is that I don't know how to generate the "Enter" key. My map has a searchbox (libraries=places), where the user inputs his choice. On a normal computer I press Enter to search; how can I do that with my onscreen keyboard? Thank you!
Q: Unexpected type of NER data when trying to train spacy ner pipe to add new named entity I'm trying to add a new named entity to spaCy, but I couldn't find good examples of Example objects for NER training, and I'm getting a value error. Here is my code: import random import spacy from spacy.util import minibatch, compounding from pathlib import Path from spacy.training import Example nlp=spacy.load('en_core_web_lg') ner=nlp.get_pipe("ner") TRAIN_DATA=[('ABC is a worldwide organization',{'entities':[0,2,'CRORG']}), ('we stand with ABC',{'entities':[24,26,'CRORG']}), ('we supports ABC',{'entities':[15,17,'CRORG']})] ner.add_label('CRORG') # Disable pipeline components that don't need to change pipe_exceptions = ["ner"] unaffected_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions] with nlp.disable_pipes(*unaffected_pipes): for iteration in range(30): random.shuffle(TRAIN_DATA) for raw_text,entity_offsets in TRAIN_DATA: doc=nlp.make_doc(raw_text) nlp.update([Example.from_dict(doc,entity_offsets)]) A: The 'entities' in TRAIN_DATA are supposed to be a list of tuples. They have to be 2D, not just 1D. So instead of: TRAIN_DATA=[('ABC is a worldwide organization',{'entities':[0,2,'CRORG']}), ('we stand with ABC',{'entities':[24,26,'CRORG']}), ('we supports ABC',{'entities':[15,17,'CRORG']})] Use: TRAIN_DATA=[('ABC is a worldwide organization',{'entities':[(0,2,'CRORG')]}), ('we stand with ABC',{'entities':[(24,26,'CRORG')]}), ('we supports ABC',{'entities':[(15,17,'CRORG')]})]
Q: Standard error of individual groups in a multinomial distribution Let's say I have a sample of data from a multinomial distribution with 10 categories. I want to determine the error in my estimate of the % of observations that fall into each of the categories. Is it fair to compute the error for each category as a binomial (category vs all other categories)? It seems like it's not, since the group outcomes depend on each other. E.g. an observation that falls into group 1 cannot fall into any of the other groups. If this method isn't fair, what options are there? As an extension, what if the categories are bins of a continuous range? Does anything change? So for example let's say we measure height, then group observations into 0.5 foot bins, and then we 'lose' the original data. A: Yes, it is fair. In the binomial distribution, the number of events in the heads/success state $n_H$ is not independent of the number of events in the tails/failure state $n_T=N-n_H$. If the events that fall into the tails state are then subsequently subjected to another binomial distribution, say "left" and "right", then the same logic holds, and so on. Using this divide-and-sub-divide method, you can build out the entire multinomial distribution from a set of nested binomials. Perhaps that gives you confidence that treating any one of them as a binomial is OK. For a multinomial distribution, the joint probability of finding $(n_1,\dots,n_k)$ events in the $k$ bins is $$ P(n_1,\dots,n_k)=\binom{N}{n_1,\dots,n_k} p_1^{n_1} \dots p_k^{n_k} $$ The expected number of events in the $i$th bin is $E(n_i)=N p_i$. The variance of events in the $i$th bin is $var(n_i)=N p_i(1-p_i).$ You'll notice these are exactly the same as the multinomial's binomial counterpart. You may add the constraints $N=\sum n_i$ and $1=\sum p_i$, but they are implied in both the binomial and multinomial forms of the problem. If there are enough events in the bin (generally a minimum of 10, plus another 10 events in all of the other bins combined), you may invoke the normal approximation on the distribution of events and calculate a standard error for the estimated proportion $\hat p_i=n_i/N$, namely $se(\hat p_i)= \sqrt{p_i(1-p_i)/N}$, which you can then use to give a normal confidence interval around the estimate (see the sketch below). Again, just as with the binomial distribution. If there aren't, then you would have to assess the 95% confidence interval manually, by building the binomial distribution for $n_i=0,1,2,\dots$ for each bin. For the arbitrary binning of continuous variables, nothing changes in the basic multinomial analysis. However, any generalization you wish to make about the $p_i$ would be contingent on the exact set of bins you choose. It is awkward to report a probability with a full list of bin definitions, so practically speaking, you should have a bin strategy that you can report in only a few words -- "unit intervals centered on the integers," for instance, or "intervals of size 2 centered on the odd integers." See also Confidence interval and sample size multinomial probabilities
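To make the per-category calculation concrete, here is a minimal sketch (the counts are made up for illustration):

import numpy as np

counts = np.array([23, 41, 12, 9, 30, 18, 25, 14, 8, 20])  # events per category
N = counts.sum()
p_hat = counts / N                       # estimated proportion per category
se = np.sqrt(p_hat * (1 - p_hat) / N)    # binomial SE for each category
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se  # approximate 95% normal CI

for i in range(len(counts)):
    print(f"category {i}: p = {p_hat[i]:.3f}, se = {se[i]:.3f}, CI = ({lo[i]:.3f}, {hi[i]:.3f})")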
Q: How to combine intersection observer with parallax? The problem with this parallax is that it runs on all sections of the .parallaxBg class page at the same time. Therefore, I would like to use the Intersection Observer to run parallax only when the section enters the viewport. // Parallax ------------------------------------------------------------------------- window.addEventListener("scroll", function parallaxFunction() { let bg = document.querySelectorAll(".parallaxBg"); let distance = window.pageYOffset; bg.forEach(parallaxBg => { parallaxBg.style.top = distance * -0.2 + "px"; }) }); // Intersection observer -------------------------------------------------------------- const bgImages = document.querySelectorAll('.parallaxBg'); observer = new IntersectionObserver((entries) => { entries.forEach(entry => { console.log(entries); if (entry.intersectionRatio > 0) { entry.target.parallaxFunction; } else { entry.target.parallaxFunction; } }); }); bgImages.forEach(image => { observer.observe(image); }); A: Yo. Bit late to the party but I needed to figure this out today. Three essential parts to this: * *Intersection Observer tracks when the parallax class is in the viewport. *Scroll listener gets triggered when the parallax class is in the viewport, and is removed when the parallax class leaves the viewport. *While the scroll listener is active, we update the transform based on the scroll position I'm running this through a debounce function to make it slightly smoother, but this is optional. I think the top position can be calculated better, and I'd be interested to know how we can optimise this, but this answers your question of how to only transform the element when it is in the viewport - you can look in the console and check the dynamically added css inline on the img tag - ONLY when in the viewport :) document.addEventListener('DOMContentLoaded', function () { var parallaxImages =[].slice.call( document.querySelectorAll(".parallax img") ) console.log(parallaxImages); if ("IntersectionObserver" in window && 'IntersectionObserverEntry' in window) { // Intersection Observer Configuration const observerOptions = { root: null, rootMargin: '0px 0px', // important: needs units on all values threshold: 0 }; var observer = new IntersectionObserver(handleIntersect, observerOptions); var el; function handleIntersect(entries, observer) { entries.forEach(function(entry) { if (entry.isIntersecting) { el = entry.target; window.addEventListener('scroll', parallax, false); } else { window.removeEventListener('scroll', parallax, false); } }); } parallaxImages.forEach(function(parallaxImage) { observer.observe(parallaxImage); }); var parallax = debounce(function() { amount = Math.round( window.pageYOffset * 0.2 ); el.style.webkitTransform = 'translateY(-'+amount+'px)'; }, 10); } }, false); /************************************* Function: Debounce *************************************/ function debounce(func, wait, immediate) { var timeout; return function() { var context = this, args = arguments; var later = function() { timeout = null; if (!immediate) func.apply(context, args); }; var callNow = immediate && !timeout; clearTimeout(timeout); timeout = setTimeout(later, wait); if (callNow) func.apply(context, args); }; }; /* PARALLAX STYLING */ .parallax { height: 40vh; overflow: hidden; } .parallax img { width: 100%; } /* NICE TO HAVE */ body { margin: 0; padding: 0; font-family: arial; } h1 { margin: 0; color: white; } header { height: 110vh; background-color: steelblue; display: flex; justify-content: 
center; align-items: center; } section.spacer { height: 110vh; background-color: seagreen; display: flex; justify-content: center; align-items: center; } footer { height: 50vh; background-color: firebrick; } <header> <h1>Scroll down!</h1> </header> <main> <section class="parallax"> <img src="https://picsum.photos/1920/1920" alt=""> </section> <section class="spacer"> <h1>Keep scrolling!</h1> </section> <section class="parallax"> <img src="https://picsum.photos/1920/1920" alt=""> </section> </main> <footer></footer>
Q: Flask Security - check what Roles a User has I was looking at the Flask-Security API and I don't see any function that returns the list of roles a specific User has. Is there any way to return a list of roles a user has? A: If you look at how the has_role(...) method has been defined, it simply iterates through self.roles. So the roles attribute in the user is a list of Role objects. You need to define your User and Role models as in the example here, so that the User model has a many-to-many relationship to the Role model set in the User.roles attribute. # This one is a list of Role objects roles = user.roles # This one is a list of Role names role_names = (role.name for role in user.roles)
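For completeness, a small usage sketch (my own, assuming the User/Role models from the Flask-Security example referenced above):

from flask_security import current_user

# Check a single role by name
if current_user.has_role("admin"):
    pass  # do admin-only things

# List all role names the current user has
role_names = [role.name for role in current_user.roles]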
Q: Can someone access my Firebase database if he/she does not have the URL? Firebase is a wonderful backend service with strong security rules. At I/O 2021, they also introduced Firebase App Check, which adds an additional layer of security. But even if I set the read/write permissions to true and do not enforce App Check, can anyone access my database without knowing the URL? If not, then what is the best way to completely hide the URL in Android Studio? A: To access the Firebase Realtime Database you must know its URL. But that also means that any application that needs to access the database must contain that URL; in the case of Android applications, this typically comes from the google-services.json file. And if that URL is present in your application binary, a malicious user can find it and use it to access your database. So: yes, you need to know the URL of the database to access it, but unfortunately you're shipping that URL to all users of your app (since the app needs the URL too).
Q: How long is a "tick" in FreeRTOS? For the functions xTaskGetTickCount() and xTaskGetTickCountFromISR(), the FreeRTOS documentation doesn't give any indication of what a "tick" is, or how long it is, or any links to where to find out. Returns: The count of ticks since vTaskStartScheduler was called. What is a "tick" in FreeRTOS? How long is it? A: I first found the answer in an archived thread at the FreeRTOS forums: The tick frequency is set by configTICK_RATE_HZ in FreeRTOSConfig.h. FreeRTOSConfig.h settings are described here: http://www.freertos.org/a00110.html If you set configTICK_RATE_HZ to 1000 (1KHz), then a tick is 1ms (one one thousandth of a second). If you set configTICK_RATE_HZ to 100 (100Hz), then a tick is 10ms (one one hundredth of a second). Etc. And from the linked FreeRTOS doc: configTICK_RATE_HZ The frequency of the RTOS tick interrupt. The tick interrupt is used to measure time. Therefore a higher tick frequency means time can be measured to a higher resolution. However, a high tick frequency also means that the RTOS kernel will use more CPU time so be less efficient. The RTOS demo applications all use a tick rate of 1000Hz. This is used to test the RTOS kernel and is higher than would normally be required. More than one task can share the same priority. The RTOS scheduler will share processor time between tasks of the same priority by switching between the tasks during each RTOS tick. A high tick rate frequency will therefore also have the effect of reducing the 'time slice' given to each task.
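To avoid hard-coding the configured rate in application code, the port layer exposes portTICK_PERIOD_MS and the pdMS_TO_TICKS() macro for converting in both directions (pdMS_TO_TICKS is available in newer FreeRTOS versions; older ports only have portTICK_RATE_MS). A small sketch:

#include "FreeRTOS.h"
#include "task.h"

void report_uptime(void)
{
    /* Milliseconds since vTaskStartScheduler(), whatever configTICK_RATE_HZ is. */
    TickType_t ticks = xTaskGetTickCount();
    uint32_t elapsed_ms = ticks * portTICK_PERIOD_MS;
    (void)elapsed_ms; /* e.g. print it over a serial port */

    /* Going the other way: block for 500 ms worth of ticks. */
    vTaskDelay(pdMS_TO_TICKS(500));
}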
Q: How to capture location coordinates without having a GPS receiver chip? I am trying to capture location coordinates for a device which does not have a GPS receiver chip (or for a device whose GPS receiver chip is damaged). Does Android provide any API to achieve this? Does A-GPS also refer to the same location API? A: Use the NETWORK_PROVIDER instead of GPS for some options
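To expand on that one-line answer with a sketch (my own code; it assumes ACCESS_COARSE_LOCATION is declared in the manifest and granted, and that this runs inside an Activity or Service):

LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
lm.requestLocationUpdates(
        LocationManager.NETWORK_PROVIDER, // cell-tower/Wi-Fi based, no GPS chip required
        5000,  // minimum time between updates, in ms
        10,    // minimum distance between updates, in meters
        new LocationListener() {
            @Override public void onLocationChanged(Location location) {
                double lat = location.getLatitude();
                double lng = location.getLongitude();
                // use the coordinates here
            }
            @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
            @Override public void onProviderEnabled(String provider) {}
            @Override public void onProviderDisabled(String provider) {}
        });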
Q: Insert into table variable using result set from a UDF I have a UDF which takes a comma separated list and turns it into rows. So the output of select * from fnDrugSplit('one,two,three',',') would be one two three When I try to insert these results into a table variable with declare @drugName1 table(drugName1 varchar(50),drugName2 varchar(50)) insert into @drugName1(drugName1,drugName2) values( (select * from fnDrugSplit('one,two,three',',') ,(select * from fnDrugSplit('one,two,three',',') ) I get Incorrect syntax near ')', the last parenthesis closing out the values block. The function is deterministic, and I don't know why I'm getting this error because declare @drugName1 table(drugName1 varchar(50),drugName2 varchar(50)) insert into @drugName1(drugName1,drugName2) values( (select 'one') ,(select 'two') ) select * from @drugName1 works fine. What am I missing here? The second parameter in the function is the delimiter for rows. SQL Server 2008 A: Your udf returns a table with 3 rows. You can't put 3 rows of 1 column into the table with the "VALUES" clause. The "VALUES" clause expects scalars, which is why "Select 'one'" and "Select 'two'" work. You don't need "VALUES"; you can just articulate your select. Insert into @drugName1 (drugName1, drugName2) select fn.ColName, fn.ColName from fnDrugSplit('one,two,three',',') fn Not sure how you want to put 3 values into 2 columns, that wasn't clear in your question. Also, I don't know what the column name is for your UDF, so I assumed ColName.
Q: Codeigniter: INSERT INTO validation I am new to Codeigniter and so I am having difficulty understanding how to validate my form. Whenever I hit submit or refresh the page, empty values are added to the database. Is there a way to avoid this by either asking for validation or not submitting anything if the values are empty? All the examples I find use HTML forms and if I hit NULL no on the database it brings up an error. Controller_ca <?php defined('BASEPATH') OR exit('No direct script access allowed'); class Controller_ca extends CI_Controller { public function __construct() { parent::__construct(); } public function index() { $this->load->model('Model_ca'); $result = $this->Model_ca->insert_chipper(); } protected function form_validation() { $this->load->library('form_validation'); $this->form_validation->set_rules('name','Name' 'required'); $this->form_validation->set_rules('location', 'Location' 'required | alpha',); $this->form_validation->set_rules('description', 'Description' 'required'); if ($this->form_validation->run() == FALSE) { //true $this->load->view('Model_ca'); } else { $this->index(); } } } ?> Model_ca <?php class Model_ca extends CI_Model { function __construct() { parent::__construct(); } function insert_chipper() { $name = $this->input->post('name'); $location = $this->input->post('location'); $description = $this->input->post('description'); $sql = "INSERT INTO chipper_reviews (name, location, description) VALUES (". $this->db->escape($name).", ". $this->db->escape($location).", ". $this->db->escape($description).")"; $result = $this->db->query($sql); } } ?> View_ca echo "<h1>Chip Advisor</h1><br/>"; $this->load->helper('form'); echo validation_errors(); echo form_fieldset('Add Chipper'); echo form_open(''); echo "Name:" . form_input('name'); echo "Location:" . form_input('location'); echo "Description:" . form_input('description'); echo form_submit('mysubmit', 'Submit Post!'); echo form_fieldset_close(); ?>
Q: Read a txt file JSON data to publish the messages in Cloud Pub Sub I am trying to publish data to Cloud Pub Sub. Data is in JSON format and is being kept in my local folder. I am not using Cloud Storage and trying to read the pubsub message directly through cloud function. Tested the flow with manually passing messages and the data is getting inserted into Bigquery tables also. Only place i got stuck is, how will i pass a .txt file JSON dataset to Cloud PubSub, Sample data {"ID":6,"NAME":"Komal","AGE":22,"ADDRESS":"Assam","SALARY":20000} Can any one pls give me a hint! I could see various options using cloud storage and all, here i am reading the changed data from DB table, insert those records into 1 dummy table and converting the data from that table to JSON format and writing to a .txt file. From here if i could publish the data to pubsub, entire flow will get completed If i manually pass like below, the data will get inserted gcloud pubsub topics publish pubsubtopic1 --message {"ID":6,"NAME":"Komal","AGE":22,"ADDRESS":"Assam","SALARY":20000} Edit on APRIL 10th Some how i could achieve the data insert from a .txt file to pubsub using a batch file. But when i call the batch file from PL SQL procedure (DBMS_SCHEDULER), it is throwing error "'gcloud' is not recognized as an internal or external command". But when i call the batch file from the command line, data is getting psuhed to pub sub and to Bigquery table as well.PFB script i am using and the PL SQL code as well. Any help will be really appreciated Batch script & PL SQL code used to call the script @echo off set file=C:\temp\TEST_EXTRACT.txt echo %file% >> C:\temp\T1.txt for /f "tokens=*" %%A in (%file%) do (ECHO %%A >> C:\temp\T2.txt ECHO cmd.exe /K cd C:\Users\test\AppData\Local\Google\Cloud SDK && gcloud pubsub topics publish pubsubtopic1 --message %%A > C:\temp\T3.txt) Below mentioned the PL SQL code which is used for calling the batch file BEGIN SYS.DBMS_SCHEDULER.CREATE_JOB( job_name => 'LOOP_JOB', job_type => 'EXECUTABLE', job_action => 'C:\WINDOWS\system32\cmd.exe', --repeat_interval => 'FREQ=WEEKLY;BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=18;BYMINUTE=0;BYSECOND=0', --start_date => SYSTIMESTAMP at time zone 'EUROPE/LONDON', job_class => 'DEFAULT_JOB_CLASS', comments => 'Job to test call out to batch script on Windows', auto_drop => FALSE, number_of_arguments => 3, enabled => FALSE); SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE( job_name => 'LOOP_JOB', argument_position => 1, argument_value => '/q'); SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE( job_name => 'LOOP_JOB', argument_position => 2, argument_value => '/c'); SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE( job_name => 'LOOP_JOB', argument_position => 3, argument_value => 'C:\temp\loop.bat'); SYS.DBMS_SCHEDULER.ENABLE( 'LOOP_JOB' ); END; / A: If you want to easily publish the contents of a single file: gcloud pubsub topics publish ${PUBSUB_TOPIC_NAME} --message "$(cat ${FILE_NAME} | jq -c)" A: The issue with your bash script is likely that the gcloud command line tool is not installed on the machine the database is actually running on or is not in the PATH for the environment running the script, so it is not found when your .bat script is run. That being said, I would highly advise against trying to do data processing in a .bat script and passing it to the command line tool, as it will be highly error prone, and have a large overhead both from going through the inefficient JSON encoding, as well as bringing up and tearing down a publisher client for every message. 
Instead, I would suggest exporting the data in CSV format and using one of the client libraries to read this file and publish to Cloud Pub/Sub. That can still be triggered from the database scheduler job you mentioned, and it will be much more efficient as well as more testable. A: If the JSON data that you've got in your file is an array, then you can publish every entry of that array to the topic with the following command: jq -c ".[]" json_array.json | xargs -t -I {} gcloud pubsub topics publish yourTopic --message {} Make sure you have jq installed; xargs is usually available by default.
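For reference, a minimal sketch of that client-library route in Python. It assumes the google-cloud-pubsub package, a placeholder project ID my-project, the topic name from the question, and one JSON record per line in the extract file:

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    # "my-project" is a placeholder; the topic name comes from the question
    topic_path = publisher.topic_path("my-project", "pubsubtopic1")

    with open("TEST_EXTRACT.txt") as infile:
        for line in infile:
            record = line.strip()
            if record:
                # publish() takes bytes and returns a future; result() blocks
                # until the server acks and returns the message ID
                future = publisher.publish(topic_path, data=record.encode("utf-8"))
                print(future.result())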
Q: POST request in an Android application I am implementing registration, but I have no idea how to write it. This is the first time I have dealt with POST requests; before this I only wrote GET ones. My thinking is that I first need to put the strings into JSON and only then send them to the server. Correct me if I'm wrong. I decided to try the Retrofit library, but the application crashed. Here is the code private static final Gson GSON = new GsonBuilder().setPrettyPrinting().create(); private static final String TAG = "this"; private final String baseUrl = "http://u1938.blue.elastictech.org/api/users"; private Gson gson = new GsonBuilder().create(); private Retrofit retrofit = new Retrofit.Builder() .addConverterFactory(GsonConverterFactory.create(gson)) .baseUrl(baseUrl) .build(); private Link parse = (Link) retrofit.create(List.class); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.registration_activity); Intent intent = getIntent(); intent.getExtras(); buttonRegistration2.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { if (editPassword != editPassword2) { Toast.makeText(RegistrationActivity.this, "Пароли не совпадают. Повторите попытку", Toast.LENGTH_SHORT).show(); } else { Map<String, String> mapJson = new HashMap<String, String>(); mapJson.put("email", editEmail.getText().toString()); mapJson.put("name", editUserName.getText().toString()); mapJson.put("password", editPassword.getText().toString()); mapJson.put("contact_number", editNumber.getText().toString()); Call<Object> call = parse.parseMethod(mapJson); try { Response<Object> response = call.execute(); Map<String, String> map = gson.fromJson(response.body().toString(), Map.class); for (Map.Entry e : map.entrySet()) { System.out.println(e.getKey() + "" + e.getValue()); Log.e(TAG, "Object"); } } catch (IOException e) { e.printStackTrace(); } } } }); } The interface interface Link { @FormUrlEncoded @POST("/users") Call<Object> parseMethod(@FieldMap Map<String, String> map); } The error java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.example.idrisov.mypost/com.example.idrisov.mypost.RegistrationActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'android.view.Window$Callback android.view.Window.getCallback()' on a null object reference at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2849) A: I recommend reading up on POST by searching the web for "java POST". A video tutorial on JSON: https://www.youtube.com/watch?v=4Almffj2Gms You will also need to install third-party libraries. I don't remember exactly how, but go to Gradle Script > build.gradle(Module:app) and add the library, which can be done with (my "external library" sits at the project root level,
so the path to the file will differ): implementation files('../../../библиотека.jar') Here is an example program I learned from (there may well be better options): import java.io.BufferedReader; import java.io.DataOutputStream; import java.io.InputStreamReader; import java.net.HttpURLConnection; import java.net.URL; import javax.net.ssl.HttpsURLConnection; public class HttpURLConnectionExample { private final String USER_AGENT = "Mozilla/5.0"; public static void main(String[] args) throws Exception { HttpURLConnectionExample http = new HttpURLConnectionExample(); System.out.println("Testing 1 - Send Http GET request"); http.sendGet(); System.out.println("\nTesting 2 - Send Http POST request"); http.sendPost(); } // HTTP GET request private void sendGet() throws Exception { String url = "https://api.kraken.com/0/public/Trades?pair=XXBTZEUR&since=0"; URL obj = new URL(url); HttpURLConnection con = (HttpURLConnection) obj.openConnection(); // optional default is GET con.setRequestMethod("GET"); //add request header con.setRequestProperty("User-Agent", USER_AGENT); int responseCode = con.getResponseCode(); System.out.println("\nSending 'GET' request to URL : " + url); System.out.println("Response Code : " + responseCode); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); //print result System.out.println(response.toString()); } // HTTP POST request private void sendPost() throws Exception { String url = "https://api.kraken.com/0/public/Ticker?pair=XBTCZUSD"; URL obj = new URL(url); HttpsURLConnection con = (HttpsURLConnection) obj.openConnection(); //add request header con.setRequestMethod("POST"); con.setRequestProperty("User-Agent", USER_AGENT); con.setRequestProperty("Accept-Language", "en-US,en;q=0.5"); String urlParameters = "{\"pair\":\"XBTEUR\"}"; // Send post request con.setDoOutput(true); DataOutputStream wr = new DataOutputStream(con.getOutputStream()); wr.writeBytes(urlParameters); wr.flush(); wr.close(); int responseCode = con.getResponseCode(); System.out.println("\nSending 'POST' request to URL : " + url); System.out.println("Post parameters : " + urlParameters); System.out.println("Response Code : " + responseCode); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); //print result System.out.println(response.toString()); } } A: http://square.github.io/retrofit/ Try using this library. An example interface: @POST("users/new") Call<User> createUser(@Body User user);
Q: How to make bootsrap 3.0 navbar brand a dropdown? Ok, im making a website and i really want to have a navbar with a drop down. So it will be like | BRAND - Link Link Link. When you click BRAND it will drop a dropdown that will have links, similar to just a link dropdown. <html> <style> .navbar .divider-vertical{ height:50px; border-left: 1px solid rgb(242, 242, 242); /*Feel free to change left color or width!*/ border-right: 1px solid rgb(255, 255, 255);/*Feel free to change right color or width!*/ } .navbar-brand { margin-left: 150px; /* This value could be different for another layout */ } #brand-Dropdown { margin-left: 150px; top: 48px; } </style> <body> <nav class="navbar navbar-default navbar-fixed-top" role="navigation"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a href="#" class="navbar-brand dropdown" data-toggle="dropdown">Gippix Servers <b class="caret"></b></a> <ul id = "brand-Dropdown" class="dropdown-menu dropdown-toggle navbar-collapse"> <li><a href="#">MrBumtart</a></li> <li><a href="#">Xyrize</a></li> <li class="divider"></li> <li><a href="#">Help</a></li> </ul> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1"> <ul class="nav navbar-nav"> <li class="active"><a href="#">Link</a></li> <li><a href="#">Link</a></li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Dropdown <b class="caret"></b></a> <ul class="dropdown-menu"> <li><a href="#">Action</a></li> <li><a href="#">Another action</a></li> <li><a href="#">Something else here</a></li> <li class="divider"></li> <li><a href="#">Separated link</a></li> <li class="divider"></li> <li><a href="#">One more separated link</a></li> </ul> </li> </ul> <form class="navbar-form navbar-left" role="search"> <div class="form-group"> <input type="text" class="form-control" placeholder="Search"> </div> <button type="submit" class="btn btn-default">Submit</button> </form> <ul class="nav navbar-nav navbar-right"> <li><a href="#">Link</a></li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Dropdown <b class="caret"></b></a> <ul class="dropdown-menu"> <li><a href="#">Action</a></li> <li><a href="#">Another action</a></li> <li><a href="#">Something else here</a></li> <li class="divider"></li> <li><a href="#">Separated link</a></li> </ul> </li> </ul> </div><!-- /.navbar-collapse --> </div><!-- /.container-fluid --> </nav> <a class = "navbar-btn btn btn-info pull-right" href = "/donate"><b>Donate<b></a> </body> A: Ok I found it for you : Code snippet copied from getbootstrap.com and modified by me to implement your idea: <nav class="navbar navbar-default" role="navigation"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a href="#" class="navbar-brand dropdown" data-toggle="dropdown">Dropdown <b 
class="caret"></b></a> <ul class="dropdown-menu"> <li><a href="#">Action</a></li> <li><a href="#">Another action</a></li> <li><a href="#">Something else here</a></li> <li class="divider"></li> <li><a href="#">Separated link</a></li> <li class="divider"></li> <li><a href="#">One more separated link</a></li> </ul> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1"> <ul class="nav navbar-nav"> <li class="active"><a href="#">Link</a></li> <li><a href="#">Link</a></li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Dropdown <b class="caret"></b></a> <ul class="dropdown-menu"> <li><a href="#">Action</a></li> <li><a href="#">Another action</a></li> <li><a href="#">Something else here</a></li> <li class="divider"></li> <li><a href="#">Separated link</a></li> <li class="divider"></li> <li><a href="#">One more separated link</a></li> </ul> </li> </ul> <form class="navbar-form navbar-left" role="search"> <div class="form-group"> <input type="text" class="form-control" placeholder="Search"> </div> <button type="submit" class="btn btn-default">Submit</button> </form> <ul class="nav navbar-nav navbar-right"> <li><a href="#">Link</a></li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Dropdown <b class="caret"></b></a> <ul class="dropdown-menu"> <li><a href="#">Action</a></li> <li><a href="#">Another action</a></li> <li><a href="#">Something else here</a></li> <li class="divider"></li> <li><a href="#">Separated link</a></li> </ul> </li> </ul> </div><!-- /.navbar-collapse --> </div><!-- /.container-fluid --> </nav>
Q: Issue with DB Connectivity via SpotFire on a 64 bit machine I am trying to establish DB connection to an ORACLE 10g DB. I have pasted the error information below. I am looking for some information on the issue. System configuration: * *Windows XP SP2 *ARCH: AMD 64 bit *TIBCOE SpotFire 64 bit Error message: Could not open data source. TargetInvocationException at Spotfire.Dxp.Framework: Exception has been thrown by the target of an invocation. (HRESULT: 80131604) Stack Trace: at Spotfire.Dxp.Framework.ApplicationModel.ProgressService.ExecuteWithProgress(String title, String description, ProgressOperation operation) at Spotfire.Dxp.Forms.Data.DataFormsUserActions.OpenData(DataSource dataSource, String progressOperationTitle, String progressOperationDescription) InvalidOperationException at System.Data.OracleClient: Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed. (HRESULT: 80131509) Stack Trace: at System.Data.OracleClient.OCI.DetermineClientVersion() at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName) at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions) at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OracleClient.OracleConnection.Open() at Spotfire.Dxp.Data.Import.DatabaseDataSource.<>c__DisplayClass4.<GetPromptModels>b__0() at Spotfire.Dxp.Framework.ApplicationModel.Progress.ExecuteSubtask(String title, ProgressOperation operation) at Spotfire.Dxp.Data.Import.DatabaseDataSource.<GetPromptModels>d__6.MoveNext() at Spotfire.Dxp.Data.DataSourceConnection.<GetPromptModels>d__2.MoveNext() at Spotfire.Dxp.Data.DataSource.Connect(IServiceProvider serviceProvider, DataSourcePromptMode promptMode, Boolean updateInternalState) at Spotfire.Dxp.Forms.Data.Import.DataSourceFactoryService.OpenDataSource(DataSource dataSource, IServiceProvider serviceProvider) at Spotfire.Dxp.Forms.Application.FormsProgressService.ProgressThread.DoOperationLoop() BadImageFormatException at System.Data.OracleClient: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B) (HRESULT: 8007000B) Stack Trace: at System.Data.Common.UnsafeNativeMethods.OCILobCopy2(IntPtr svchp, IntPtr errhp, IntPtr dst_locp, IntPtr src_locp, UInt64 amount, UInt64 dst_offset, UInt64 src_offset) at System.Data.OracleClient.OCI.DetermineClientVersion() A: The Key error here is: BadImageFormatException at System.Data.OracleClient: An attempt was made to load a program with an incorrect format. 
(Exception from HRESULT: 0x8007000B) (HRESULT: 8007000B) The BadImageFormatException can occur if you are trying to load a 32-bit DLL into a 64-bit app, or vice versa. From the sounds of it, either (1) you don't have a 64-bit Oracle client driver installed, (2) the connection string is trying to load a 32-bit Oracle client driver, or (3) the client driver is actually corrupt. A: It seems there is a way to force Spotfire to run in 32-bit mode, https://tibbr.tibcommunity.com/tibbr/#!/messages/66091 but I cannot figure out how. Does anyone know how to force Spotfire to run in 32-bit mode?
Q: How can I build multi layout column with flex? I want to build a multi-layout column with flex-items where each column contains a dynamic list, however, each component or item will contain 2 or more extra items where it would be expanded on hover, but how can I build the following layout to achieve the desired UI? I have done some of the components where on hover expands to extra items, but it acts as a row instead of column. Desired: Layout: .flex-container { padding: 0; margin: 0; list-style: none; max-height: 600px; display: -webkit-box; display: -moz-box; display: -ms-flexbox; display: -webkit-flex; display: flex; -webkit-flex-flow: column wrap; justify-content: space-around; } .flex-item { flex-direction: column; } .ex-col { width: 100%; max-width: 420px; background: white; padding: 20px; color: #555; } .ex-col h2 { font-size: 1.1em; line-height: 1.3em; text-align: center; font-weight: bold; margin: 0 0 10px; padding: 0 0 10px; border-bottom: 1px solid slategray; width: 100%; font-weight: bold; } .ex-col h2>small { display: block; font-size: .9rem; line-height: 1.3rem; font-weight: normal; } .ex-list { list-style: none; padding: 0; margin: 0; display: grid; grid-gap: 10px; } /* The list item element. Must be FLEX */ .ex-list>li { display: flex; align-items: center; min-height: 70px; } /* The list heading element */ .ex-list h3 { font-size: 1em; line-height: 1.4em; padding: 6px 8px; border: 1px solid slategray; display: flex; align-items: center; /* Set flex size to 50% of the parent element width. This is a good way to make sure it is always 50% */ flex-basis: 100%; max-width: 50%; cursor: pointer; margin: 0; } .ex-list h3>small { font-size: .6em; line-height: 1em; color: lightgray; margin: 0 0 0 .5em; } /* The sub-menu element. Initial state is display:none */ .ex-list-sub { display: none; list-style: none; padding: 0 0 0 20px; margin: 0; position: relative; flex-basis: 100%; max-width: 50%; transition: opacity .4s ease-out; border: 1px solid gray; border-left: none; } .ex-list-sub:before { content: ""; display: block; position: absolute; top: 0; bottom: 0; left: 0; width: 20px; background: gray; } .ex-list-sub>li:not(:first-child) { border-top: 1px solid gray; } .ex-list-sub>li>a { display: block; padding: 6px 8px; color: inherit; text-decoration: none; transition: background .2s; } .ex-list-sub>li>a:hover { background: #efefef; } /* THE HOVER ACTION */ /* Set the hover on the parent element. Has to be the parent because otherwise the pop-up would disappear when you hover over it */ .ex-list>li:hover .ex-list-sub { display: block; } <div class="flex-container"> <div class="flex-item"> <div class="ex-col"> <h2> Reconnaissance <small>10 Techniques</small> </h2> <ul class="ex-list"> <li> <h3> Active Scanning <small>(0/2)</small> </h3> <ul class="ex-list-sub"> <li> <a href="#"> Scanning IP Blocks </a> </li> <li> <a href="#"> Vulnerability Scanning </a> </li> </ul> </li> </ul> </div> <div class="ex-col"> <h2> Reconnaissance <small>10 Techniques</small> </h2> <ul class="ex-list"> <li> <h3> Active Scanning <small>(0/2)</small> </h3> <ul class="ex-list-sub"> <li> <a href="#"> Scanning IP Blocks </a> </li> <li> <a href="#"> Vulnerability Scanning </a> </li> </ul> </li> </ul> </div> </div> </div> A: Please try this code, To How can I build a multi-layout column with flex? 
html { height: 100%; } body { height: 100%; display: flex; flex-flow: column nowrap; } h1 { padding: 1em; } #content { padding: 10px; background-color: #eee; display: flex; flex-grow: 1; } #content > .group { margin: 10px; padding: 10px; border: 1px solid #cfcfcf; background-color: #ddd; flex: 1 1 auto; } #content > .group:first-child { columns: 10em; flex-grow: 2; } #content > .group .item { margin: 10px; padding: 10px; background-color: #aaa; break-inside: avoid; } #content > .group .item:first-child { margin-top: 0; } <h1>Page Title</h1> <div id="content"> <div class="group"> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> </div> <div class="group"> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> </div> <div class="group"> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> <div class="item">Item</div> </div> </div> I hope this code will be useful to you. Thank you. A: You need to add display: flex; to your flex-item class. And you should probably rename it as it will be out of sync with reality. Here is a codepen with the example
Q: How to extract tables from a PDF using Camelot? I want to extract all tables from a PDF using Camelot in Python 3. import camelot # PDF file to extract tables from file = "./pdf_file/ooo.pdf" tables = camelot.read_pdf(file) # number of tables extracted print("Total tables extracted:", tables.n) # print the first table as Pandas DataFrame print(tables[0].df) # export individually tables[0].to_csv("./pdf_file/ooo.csv") and then I get only 1 table from the 1st page of the pdf. How do I extract all the tables from the PDF file? A: tables = camelot.read_pdf(file, pages='1-end') If the pages parameter is not specified, Camelot analyzes only the first page. For a better explanation, see the official documentation. A: In order to extract PDF tables with Camelot you have to use the following code. You have to use the stream flavor because it is very good at detecting almost all PDF tables. Also, if you have problems with the extraction, you can tune the row_tol and edge_tol parameters, for example row_tol=0 and edge_tol=500. pdf_archive = camelot.read_pdf(file_path, pages="all", flavor="stream") for page, pdf_table in enumerate(pdf_archive): print(pdf_archive[page].df)
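Putting the two answers together, a short sketch that scans every page and exports each detected table to its own CSV (file paths reused from the question; the output naming is just illustrative):

    import camelot

    # pages="all" (or "1-end") looks beyond the first page; add flavor="stream"
    # and tune row_tol/edge_tol if the default lattice parser misses tables
    tables = camelot.read_pdf("./pdf_file/ooo.pdf", pages="all")
    print("Total tables extracted:", tables.n)
    for index, table in enumerate(tables):
        table.to_csv("./pdf_file/ooo_{}.csv".format(index))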
Q: Expanding is working perfectly but if a link is there it is also clicking by default and then expanding which should not be the case var tree = document.querySelectorAll('ul.tree a:not(:last-child)'); for(var i = 0; i < tree.length; i++){ tree[i].addEventListener('click', function(e) { var parent = e.target.parentElement; var classList = parent.classList; if(classList.contains("open")) { classList.remove('open'); var opensubs = parent.querySelectorAll(':scope .open'); for(var i = 0; i < opensubs.length; i++){ opensubs[i].classList.remove('open'); } } else { classList.add('open'); } }); } ul.tree li { list-style-type: none; position: relative; } ul.tree li ul { display: none; } ul.tree li.open > ul { display: block; } ul.tree li a { color: black; text-decoration: none; } ul.tree li a:before { height: 1em; padding:0 .1em; font-size: .8em; display: block; position: absolute; left: -1.3em; top: .2em; } ul.tree li > a:not(:last-child):before { content: '+'; } ul.tree li.open > a:not(:last-child):before { content: '-'; } <ul class="tree"> <li><a href='https://www.javascript.com' target="_blank">JavaScript</a> <ul> <li><a href='https://www.javascript.com' target="_blank">JavaScript Source</a></li> <li><a href='https://www.javascript.com' target="_blank">WebReference JavaScript Articles</a></li> <li><a href='https://www.javascript.com' target="_blank">JavaScript.com</a></li> </ul> </li> <li><a href='https://www.webdevelopment.com' target="_blank">Web Development</a> <ul> <li><a href='https://www.webreference.com' target="_blank">Web Reference</a></li> <li><a href='https://www.webdeveloper.com' target="_blank">Web Developer</a></li> <li><a href='https://www.wdvl.com' target="_blank">WDVL</a></li> </ul> </li> <li><a href='https://www.forums.com' target="_blank">Forums</a> <ul> <li><a href='https://www.xml.com' target="_blank">XML</a></li> <li><a href='https://www.html.com' target="_blank">HTML</a></li> <li><a href='https://www.javascript.com' target="_blank">JavaScript</a></li> <li><a href='https://www.perl.com' target="_blank">Perl</a></li> <li><a href='https://www.php.com' target="_blank">PHP</a> <ul> <li><a href='https://www.javascript.com' target="_blank">JavaScript Source</a></li> <li><a href='https://www.javascript.com' target="_blank">WebReference JavaScript Articles</a></li> <li><a href='https://www.javascript.com' target="_blank">JavaScript.com</a></li> </ul> </li> </ul> </li> <li><a href='https://www.web.com' target="_blank">Miscellaneous Web Sites</a> <ul> <li><a href='https://www.counter.com' target="_blank">The Counter</a></li> <li><a href='https://www.guestbook.com' target="_blank">The Guestbook</a></li> <li><a href='https://www.isp.com' target="_blank">The List of ISPs</a></li> <li><a href='https://www.jobs.com' target="_blank">Internet Jobs</a></li> </ul> </li> </ul> This above code is working fine in terms of expanding and collapsing. But when I am trying to expand/collapse by clicking on the word or "+" button, then it is clicking the associated link also and opening a new page which I do not want. only "+" or "-" button will do the expanding and collapsing and if I click on the word then only the link should open. A: I would add the click and the icon both on the list element instead of the anchor tag. If you add it to the anchor tag, that will trigger the click on the URL. 
// For every list item that is expandable, toggle the open class on click var expandableArray = Array.from(document.getElementsByClassName("expand")); for(var index in expandableArray) { var list = expandableArray[index]; list.addEventListener('click', function(e) { e.currentTarget.classList.toggle('open'); }); } ul { list-style:none; } li a { text-decoration: none; color:black; font-family: verdana; } li:not(.open) ul { /* Work with classes to open and close the list */ display:none; } li.expand:before { /* Set the icon on the list tag */ content: "+ "; /* the + icon is not an anchor tag, so set the cursor manually */ cursor:pointer; } li.expand.open:before { content: "- "; } <ul> <li class="expand"><a href="#">test</a> <ul> <li><a href="#">test1</a></li> <li><a href="#">test1</a></li> <li><a href="#">test1</a></li> </ul> </li> <li class="expand"><a href="#">test</a> <ul> <li><a href="#">test1</a></li> <li><a href="#">test1</a></li> <li><a href="#">test1</a></li> </ul> </li> <li><a href="#">test</a></li> </ul>
Q: Binary Search Tree Destructor issue I am currently writing a binary search tree class, but I am getting an error in the destructor for my BST class. This is the relevant part of my code: Node Struct: struct Node{ int key; struct Node* left; struct Node* right; }; Function to create a new node: Node* BST::CreateNode(int key){ Node* temp_node = new Node(); temp_node->key = key; temp_node->left = nullptr; temp_node->right = nullptr; return temp_node; } Assignment operator: BST& BST::operator=(const BST& cpy_bst){ if (this != &cpy_bst){ Node* cpy_root = cpy_bst.root; this->root=assgRec(cpy_root, this->root); } return *this; } Node* BST::assgRec(Node* src_root, Node* dest_root){ if (src_root != nullptr){ dest_root = CreateNode(src_root->key); dest_root->left=assgRec(src_root->left, dest_root->left); dest_root->right=assgRec(src_root->right, dest_root->right); } return src_root; } Destructor: BST::~BST(){ DestroyNode(root); } void BST::DestroyNode(Node* r){ if (r != nullptr){ DestroyNode(r->left); DestroyNode(r->right); delete r; } } The problem is that after I have used the assignment in the main function, like: BST bin_tree2=bin_tree1; the destructor is called, but after it deletes the data in bin_tree1, all values that were placed in bin_tree2 have some junk values in them and I get an error on that part. Any help would be greatly appreciated. Thanks A: This looks like you are copying pointers and then accessing them after the memory has been deallocated. The culprit is BST::assgRec: it allocates new nodes into dest_root, but then returns src_root, so operator= effectively does this->root = cpy_root. Both trees end up sharing the same nodes (and the freshly allocated copies leak), so whichever tree is destroyed first leaves the other pointing at freed memory. Returning dest_root instead of src_root fixes the sharing. Also note that BST bin_tree2 = bin_tree1; invokes the copy constructor, not operator=, so you need the same deep-copy logic there as well.
Q: Is 'malarkey' an acceptable word to use today? Is it acceptable today to use 'malarkey' to describe an idea that is nonsensical? Or are there better terms to use? A: Joe Biden seems to use it pretty regularly. It's not a common word, but one that is widely understood. It has a feel of being playfully old-fashioned. A: The word 'malarkey' (less commonly, and mainly in the US, 'malarky') is common in British English, as used in the UK, Australia, New Zealand, Ireland, etc., and is used by Americans including Joe Biden recently. It is often used to describe something considered nonsensical, pointless or a waste of time: I wanted to own an elephant but I found out I had to complete a lot of forms and buy a wildlife licence, and I couldn't be bothered with all that malarkey. I love the flavours of a Bakewell tart, but I really can't be bothered to faff around with making pastry and baking it blind and all that malarkey. He thinks that everything politicians say is a bunch of malarkey. I like the socializing but I can't be bothered with dressing up and all that malarkey. Malarkey (Lexico) Malarkey (Merriam-Webster) A: According to Ngram Viewer statistics, the word malarkey has become more popular over the years; however, I would recommend using other, more common synonyms like hogwash or claptrap (if you do not want to use nonsense as the most obvious description).
Q: gluOrtho2D and glViewport I have an object defined in world coordinates, say a circle centered at (2,3) with radius 4. If I want the circle to not be distorted, to be entirely visible in the viewport and to be as big as possible within the viewport, how can I formulate a gluOrtho2D command to create a world window based on the aforementioned specs given that: glViewport(20, 30, 1000, 500)? I am getting confused with the whole viewport vs world vs screen, etc. coordinates. Can someone walk me through it? I really want to get the hang of this. A: In your example, the viewport is 1000 pixels across by 500 pixels high. So you need to specify gluOrtho2D coordinates that have the same aspect ratio (2:1). Your circle is 4 units in radius, so you need a view that is at least 8 units high by 8 units wide. Considering the 2:1 aspect ratio, let's make that 16 units wide by 8 units high. The center is at (2, 3). So centering these 16 x 8 around that you should get: gluOrtho2D (2 - 8, 2 + 8, 3 - 4, 3 + 4); That is: gluOrtho2D (-6, 10, -1, 7); This effectively maps the X coordinate of -6 to the left edge of the viewport. The glViewport mapping then maps that to the actual location on the screen. As the screen size changes, you must adjust the gluOrtho2D coordinates to compensate for the aspect ratio, but as long as the viewport is 2:1, these gluOrtho2D calls will not need to change.
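The same computation, generalized into a small helper (an illustrative Python sketch; the returned tuple is what you would pass to gluOrtho2D):

    def gluortho2d_bounds(vp_width, vp_height, cx, cy, r):
        """Bounds that show the whole circle, as large as possible, undistorted."""
        aspect = vp_width / vp_height
        if aspect >= 1.0:
            half_w, half_h = r * aspect, r   # wide viewport: pad horizontally
        else:
            half_w, half_h = r, r / aspect   # tall viewport: pad vertically
        return (cx - half_w, cx + half_w, cy - half_h, cy + half_h)

    print(gluortho2d_bounds(1000, 500, 2, 3, 4))  # (-6.0, 10.0, -1.0, 7.0)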
Q: Get system local timezone in python Seems strange, but I cannot find an easy way to find the local timezone using pandas/pytz in Python. I can do: >>> pd.Timestamp('now', tz='utc').isoformat() Out[47]: '2016-01-28T09:36:35.604000+00:00' >>> pd.Timestamp('now').isoformat() Out[48]: '2016-01-28T10:36:41.830000' >>> pd.Timestamp('now').tz_localize('utc') - pd.Timestamp('now', tz='utc') Out[49]: Timedelta('0 days 01:00:00') which will give me the timezone offset, but this is probably not the best way to do it... Is there a command in pytz or pandas to get the system time zone? (preferably in Python 2.7) A: Quite a few locale/time-related settings at the OS level are covered by the time module: import time # Since Python 3.3 local_time = time.localtime() # returns a `time.struct_time` tzname_local = local_time.tm_zone # 'EST' dst = local_time.tm_isdst # from the docs: may be set to 1 when daylight savings time is in effect, # and 0 when it is not. A value of -1 indicates that this is not known, # and will usually result in the correct state being filled in. The tm_gmtoff and tm_zone attributes are available on platforms with a C library supporting the corresponding fields in struct tm. See: https://docs.python.org/3/library/time.html#time.struct_time # At least from Python 2.7.18 local_tzname = time.tzname # 'EST' A tuple of two strings: the first is the name of the local non-DST timezone, the second is the name of the local DST timezone. If no DST timezone is defined, the second string should not be used. See: https://docs.python.org/2.7/library/time.html#time.tzname Another trick is to use datetime.now().astimezone(), as found here, along with the reason why it fails on Python 2.x: from datetime import datetime # Python 3 will return a datetime with local timezone, local_now = datetime.now().astimezone() # Doesn't work on python 2.x # datetime.now().astimezone() -> TypeError: Required argument 'tz' (pos 1) not found # datetime.now().astimezone(dateutil.tz.UTC) -> ValueError: astimezone() cannot be applied to a naive datetime local_tz = local_now.tzinfo # datetime.timezone local_tzname = local_tz.tzname(local_now) print(local_tzname) A: I don't think this is possible using pytz or pandas, but you can always install python-dateutil or tzlocal: from dateutil.tz import tzlocal datetime.now(tzlocal()) or from tzlocal import get_localzone local_tz = get_localzone() A: time.timezone should work: The offset of the local (non-DST) timezone, in seconds west of UTC (negative in most of Western Europe, positive in the US, zero in the UK). Dividing by 3600 will give you the offset in hours: import time print(time.timezone / 3600.0) This does not require any additional Python libraries. A: I have found that in many cases this works (since Python 3.6): from datetime import datetime # use this extension and it adds the timezone tznow = datetime.now().astimezone() print(tznow.isoformat()) 2020-11-05T06:56:38.514560-08:00 # It shows that it does have a valid timezone type(tznow.tzinfo) <class 'datetime.timezone'> I find this handy as it does not depend on external packages. It appears to work only in Python 3 (but not in Python 2). A: While it doesn't use pytz/pandas, the other answers don't either, so I figured I should post what I'm using on mac/linux: import subprocess timezone = subprocess.check_output(["date", "+%Z"]) Benefits over the other answers: it respects daylight saving time and doesn't require additional libraries to be installed.
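Taken together, a quick side-by-side of the standard-library options from the answers above (Python 3.6+, no third-party packages):

    import time
    from datetime import datetime

    print(time.tzname)                         # e.g. ('EST', 'EDT')
    print(time.timezone / 3600.0)              # non-DST offset in hours west of UTC
    print(datetime.now().astimezone().tzinfo)  # fixed-offset tzinfo, Python 3.6+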
Q: Three screw problem There are three identical screws with different amounts of nuts and disks on them. Here is the problem picture: How do you calculate the weight of a screw, the nuts and the disks? A: Considering the three images, we can say \begin{cases} 1B+3D+1H &=55\\ 1B+1D+2H &=42\\ 1B+2D+3H &=56 \end{cases} where $B$ stands for the weight of the bolt (the screw), $D$ for a disk and $H$ for a hexagon nut. Solving the system, we have \begin{cases} B &=23,\\ D &=9,\\ H &=5. \end{cases}
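For completeness, the elimination steps: subtracting the second equation from the first and from the third gives $$2D-H=13 \qquad\text{and}\qquad D+H=14.$$ Adding these yields $3D=27$, hence $D=9$, then $H=14-9=5$ and $B=42-D-2H=42-9-10=23$.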
Q: Magento Service temporary unavailable I'm trying to request the products from Magento using the REST API for Android. I've created the authentication using Scribe. I read that it is not meant for Android, so I searched more and found signpost. Here is my code: private void requestOAuth(String url) { OkHttpOAuthConsumer consumer = new OkHttpOAuthConsumer(MAGENTO_API_KEY, MAGENTO_API_SECRET); consumer.setTokenWithSecret(getAccessToken().getToken(), getAccessToken().getSecret()); OkHttpClient client = new OkHttpClient.Builder() .addInterceptor(new SigningInterceptor(consumer)) .build(); final Request request = new Request.Builder() .url(url).build(); try { Request signedRequest = (Request) consumer.sign(request).unwrap(); Call call = client.newCall(signedRequest); Logger.i(TAG, ">>>> oAuth URL = " + url); Response response = call.execute(); String responseString = response.body().string(); Logger.i(TAG, "oAuth = " + response.code() + " oAuth Request ___ response = " + responseString); } catch (OAuthMessageSignerException | OAuthExpectationFailedException | OAuthCommunicationException | IOException e) { Logger.e(e); } } I always get "Service temporary unavailable" and code 500. I enabled guest access to test it, and I found out that I have to send the header "accept" > "application/json", otherwise it does not work as guest. I tried that while requesting with authentication, but it responded with something like "user role not available". I've done the part in Magento where I create a role, give it permissions, etc. (hence authentication is done and I have a token and token secret). Can anyone suggest why it doesn't work? I've been searching for days and trying a lot of things, and nothing has worked... Is there any header that I must send?
Q: Select an array of elements in jQuery I need to wrap up an array of elements into a jQuery object as if they were selected, so that I can call various jQuery actions on them. I'm looking for a function like foo below that accepts an array of elements and returns a jQuery object with them in it. var elements = [element1, element2, element3]; $(foo(elements)).click(function() { ... }); Can someone shed some light on this? Thanks much. A: Just do $(elements).click( function(){ ... }); if your elements are actual references to the DOM. Demo: http://jsfiddle.net/gaby/dVKEP/ A: Use jQuery.each Example: $.each(elements, function(index, element) { $(element).doStuff(); }); A: Use each to iterate over both objects and arrays var elements = ['element1', 'element2', 'element3']; $.each(elements, function(index, value) { alert(index + ': ' + value); }); Check the working example at http://jsfiddle.net/LpZue/
Q: How does Rails process images uploaded by a form? Very simple question.. strangely, I can't find any intuitive answers anywhere. I've got an HTML form which allows users to upload an image. When this form is submitted and goes to a Rails controller, how do I get the image? Suppose I want it in base64. When I do image = params["image"], I just get a filename... but where is this file? Is it on my server? How do I then convert this to base64? I guess the conversion is easy once I know where this file actually is on my server... A: params['image'] should be an instance of Rack::Multipart::UploadedFile, so you should be able to access the path on disk by doing params['image'].path. P.S.: To save a character, most prefer to use symbols, since most Rails hashes are HashWithIndifferentAccess and can be accessed using either a symbol or a string key. So params[:image].path :-)
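As for the base64 part of the question: assuming Ruby's standard Base64 module (stdlib, not Rails-specific), something along the lines of Base64.encode64(File.binread(params[:image].path)) after a require 'base64' reads the uploaded temp file from disk and encodes its bytes.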
Q: Error in Textbook Appendix; Or my Error? I'm taking a discrete mathematics course. We are covering logic and proofs in the current section, specifically argument form and validity. I am doing one of the practice problems, which is similar to an assigned problem that I must turn in for credit, and the provided answer to the practice problem seems to contradict what is said in the text, as well as contradict itself. There is either a mistake, or I am somehow mistaken. So here is the basis of what I'm doing. The problem is to construct a truth table from a set of provided premises and a provided conclusion. The truth table will indicate whether or not the argument has valid form. This is done by determining whether the truth value of the conclusion is true wherever all premises are true. I am also instructed to highlight the premises in the truth table. So when setting up the truth table I break the premises down into their individual components and expand out to the whole of the premises. The most basic individual components are p, q, and r. But then there are negations and possible combinations of these that may be part of the problem but, as I understand it, are not part of the premises as a whole. In the appendix, all components other than the most basic p, q, and r are being marked as the premises. Earlier in the chapter it is not explained this way. One is not even to test the conclusion for its truth value unless all premises in the same row are true, and some of the components which the example has marked as being part of the premises aren't even true, yet the book still tests the conclusion for its truth value there. What I perceive to be the actual premises are in fact true in those rows, though, so it is right that the conclusion is tested for its truth value there. I hope what I am explaining is understandable; I'm new to this branch of mathematics, so I'm a little fuzzy on my explaining abilities here. I will upload some pictures of what's happening. From the chapter: The above example shows that the only parts marked as the premises are the statements that appear whole in the originating argument. This is from the textbook's chapter on this subject. From the appendix: The above example is from the appendix, which provides an example answer to one of the questions at the end of the chapter. As you can see, it has marked the individual component NOT q as one of the premises. NOT q does not show up on its own in the original argument, and it isn't even true in the last 2 rows in which the conclusion was tested. What I perceive to be the premises are the three columns to the right of NOT q. I don't think NOT q is supposed to be part of the premises. I just want to get it right so I can turn in accurate work. Either the answer to the example question provided is wrong, because it seems to differ from what the example in the chapter shows, or I am missing something. Please advise. A: It's a typo in the part you quote from the appendix. The column headed $\sim p$ should not have been part of the "premises" bracket. (Note that this column does not even have a T in the line that the blue comment states to "have true premises".)
Q: Opening a view in a new HTML tab I am trying to open a new tab only when certain conditions are met. As I understand it, target="_blank" on the HTML form is used for this; the problem is that the tab would then open regardless of whether or not the condition is met. How can I make it happen only when the condition is met? This is my ActionResult, where I check the conditions: public ActionResult ChAzul(double? titulo) { ConexionSQL cn = new ConexionSQL(); var suscriptor = cn.cargarDatos(Convert.ToDouble(titulo)); var caracteres = Convert.ToString(titulo).Length; string uname = string.Empty; if (Session["uname"] != null) { uname = Convert.ToString(Session["uname"]); } var usuario = cn.datosCob(uname); if (uname == string.Empty) { return RedirectToAction("Index", "Home"); } else if (usuario[0].conectado == false) { return RedirectToAction("Index", "Home"); } else if (caracteres <= 3 || caracteres > 6) { ViewBag.Alert = "La cantidad de caracteres no puede ser menor a 4 (cuatro) ni mayor a 6 (seis)."; return View("Cuotas", usuario); } else if (suscriptor.Count <= 0) { ViewBag.Alert = "Lo sentimos, este título no existe."; return View("Cuotas", usuario); } else { return View("ChAzul", suscriptor); } } Only on this line, return View("ChAzul", suscriptor);, should the new tab be opened. How can I achieve it? This is my view, in case it helps: <form id="frmCU" method="post" action="@Url.Action("ChAzul", "Home")"> <label for="titulo">Título: </label> <input type="number" id="titulo" oninput="javascript: if (this.value.length > this.maxLength) this.value = this.value.slice(0, this.maxLength);" name="titulo" maxlength="6" placeholder="Ingrese su título..." required title="Sólo letras y números. Cantidad mínima de caracteres: 4. Cantidad máxima de caracteres: 5" onkeypress="return soloNumeros(event)" autofocus> <input type="submit" value="Buscar"/> @if (ViewBag.Alert != null) { <div class="alert"> <span class="closebtn">&times;</span> <strong>Providus informa: </strong> <p id="textoAlerta">@ViewBag.Alert</p> </div> } </form> A: You can run JavaScript to open the new window. For example, let's add a new ViewBag value to your code, such as ViewBag.Encontrado = true (it could be another value, such as string.Empty). public ActionResult ChAzul(double ? titulo) { ConexionSQL cn = new ConexionSQL(); var suscriptor = cn.cargarDatos(Convert.ToDouble(titulo)); var caracteres = Convert.ToString(titulo).Length; string uname = string.Empty; if (Session["uname"] != null) { uname = Convert.ToString(Session["uname"]); } var usuario = cn.datosCob(uname); if (uname == string.Empty) { return RedirectToAction("Index", "Home"); } else if (usuario[0].conectado == false) { return RedirectToAction("Index", "Home"); } else if (caracteres <= 3 || caracteres > 6) { ViewBag.Alert = "La cantidad de caracteres no puede ser menor a 4 (cuatro) ni mayor a 6 (seis)."; return View("Cuotas", usuario); } else if (suscriptor.Count <= 0) { ViewBag.Alert = "Lo sentimos, este título no existe."; return View("Cuotas", usuario); } else { ViewBag.Encontrado = true; // here is the control value // and now we return to the form (you will need to specify the correct view) return View(); } } Then, in the cshtml file that contains the form, we include a piece of JavaScript code to open the new view in another tab.
If the ViewBag.Encontrado value exists, then that code will run: @if (ViewBag.Encontrado != null) { <script> var miRedirect = document.createElement('a'); miRedirect.setAttribute('href', '/ChAzul/Index?parametro01=valor01&parametro02=valor02'); miRedirect.setAttribute('target', '_blank'); miRedirect.click(); </script> } The JavaScript code is very simple: it creates a link to the ChAzul/Index route with 2 parameters (you will need to change the HREF attribute value to suit your needs), and it will open in another tab. If you are worried about security, remember to check in your actions whether the user is logged in and has permissions, etc. (just as you demonstrate in your code).
Q: flask db upgrade not working inside of docker container Generally do not post here, so forgive me if anything is not up to code, but I have built a micro-service to run database migrations using flask-migrate/alembic. This has seemed like a very good option for the group I am working with. Up until very recently, the micro-service could be deployed very easily by pointing to different databases and running upgrades, but recently, the flask db upgrade command has stopped working inside of the docker container. As can be seen I am using alembic-utils here to handle some aspects of dbmigrations less commonly handled by flask-migrate like views/materialized views etc. Dockerfile: FROM continuumio/miniconda3 COPY ./ ./ WORKDIR /dbapp RUN conda update -n base -c defaults conda -y RUN conda env create -f environment_py38db.yml RUN chmod +x run.sh ENV PATH /opt/conda/envs/py38db/bin:$PATH RUN echo "source activate py38db" > ~/.bashrc RUN /bin/bash -c "source activate py38db" ENTRYPOINT [ "./run.sh" ] run.sh: #!/bin/bash python check_create_db.py flask db upgrade environment_py38db.yml: name: py38db channels: - defaults - conda-forge dependencies: - Flask==2.2.0 - Flask-Migrate==3.1.0 - Flask-SQLAlchemy==3.0.2 - GeoAlchemy2==0.12.5 - psycopg2 - boto3==1.24.96 - botocore==1.27.96 - pip - pip: - retrie==0.1.2 - alembic-utils==0.7.8 EDITED TO INCLUDE OUTPUT: from inside the container: (base) david@<ip>:~/workspace/dbmigrations$ docker run --rm -it --entrypoint bash -e PGUSER="user" -e PGDATABASE="trial_db" -e PGHOST="localhost" -e PGPORT="5432" -e PGPASSWORD="pw" --net=host migrations:latest (py38db) root@<ip>:/dbapp# python check_create_db.py successfully created database : trial_db (py38db) root@<ip>:/dbapp# flask db upgrade from local environment (py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ python check_create_db.py database: trial_db already exists: skipping... (py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ flask db upgrade INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. INFO [alembic.runtime.migration] Running upgrade -> 41f5be29ae44, initital migration to generate tables INFO [alembic.runtime.migration] Running upgrade 41f5be29ae44 -> 34c067400f6b, add materialized views <. . .> INFO [alembic.runtime.migration] Running upgrade 34c067400f6b -> 34c067400f6b_views, add <. . .> INFO [alembic.runtime.migration] Running upgrade 34c067400f6b_views -> b51d57354e6c, add <. . .> INFO [alembic.runtime.migration] Running upgrade b51d57354e6c -> 97d41cc70cb2, add-functions (py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ As the output shows, flask db upgrade is hanging inside the docker container while running locally. Both environments are reading in the db parameters from environment variables, and these are being read correctly (the fact that check_create_db.py runs confirms this). I can share more of the code if you can help me figure this out. For good measure, here is the python script: check_create_db.py import psycopg2 import os def recreate_db(): """ checks to see if the database set by env variables already exists and creates the appropriate db if it does not exist. 
""" try: # print statemens would be replaced by python logging modules connection = psycopg2.connect( user=os.environ["PGUSER"], password=os.environ["PGPASSWORD"], host=os.environ["PGHOST"], port=os.environ["PGPORT"], dbname='postgres' ) connection.set_session(autocommit=True) with connection.cursor() as cursor: cursor.execute(f"SELECT 1 FROM pg_catalog.pg_database WHERE datname = '{os.environ['PGDATABASE']}'") exists = cursor.fetchone() if not exists: cursor.execute(f"CREATE DATABASE {os.environ['PGDATABASE']}") print(f"successfully created database : {os.environ['PGDATABASE']}") else: print(f"database: {os.environ['PGDATABASE']} already exists: skipping...") except Exception as e: print(e) finally: if connection: connection.close() if __name__ == "__main__": recreate_db() A: Ok, so I was able to find the bug easily enough by going through all the commits to isolate when the program stopped working and it was an easy fix. It has however left me with more questions. The cause of the problem was that in the root directory of the project ( so dbmigrations if you are following above..) I had added an __init__.py. This was unnecessary, but I thought it might help me access database objects defined outside of the env.py in my migrations directory after adding the path to my sys.path in env.py. This was not required, and I probably shouldv'e known not to add the __init__.py to a folder I did not intend to use a python module. I continue to find it strange is that the project still ran perfectly fine locally, with the same __init__.py in the root folder. However, from within the docker container, this cause the flask-migrate commands to be unresponsive. This remains a point of curiosity. In any case, if you are feeling like throwing an __init__.py in the root directory of a project, here is a data point that should discourage you from doing so, and it would probably be poor design to do so in most cases anyway.
Q: Create new file containing filename and count each file I need to create a new file_count.txt containing filename and line count. Directory Structure $ find asia emea -name \*.gz asia/2013/emp_asia_13.txt.gz asia/2015/emp_asia_15.txt.gz asia/2014/emp_asia_14.txt.gz emea/2013/emp_emea_13.txt.gz emea/2015/emp_emea_15.txt.gz emea/2014/emp_emea_14.txt.gz The output file should be like: emp_asia_13.txt.gz 20 emp_asia_15.txt.gz 15 emp_asia_14.txt.gz 50 emp_emea_13.txt.gz 32 emp_emea_15.txt.gz 26 emp_emea_14.txt.gz 70 A: Solution using a for loop for file in $(find asia emea -name \*.gz -print0 | xargs -0) do echo -n $(basename $file); gunzip -c $file |wc -l; done >> file_count.txt In one line, it gives: $ for file in $(find asia emea -name \*.gz -print0 | xargs -0); do echo -n $(basename $file); gunzip -c $file |wc -l; done >> file_count.txt And the output is: $ cat file_count.txt emp_asia_13.txt.gz 4 emp_asia_14.txt.gz 10 emp_emea_15.txt.gz 17 A: You could also try: find asia emea -type f -name "*gz" | while IFS= read -r fname; do printf "%s %s\n" "$fname" $(gzip -dc "$fname" | wc -l) >> file_count.txt done which as a 1-liner would be: find asia emea -type f -name "*gz" | while IFS= read -r fname; do printf "%s %s\n" "$fname" $(gzip -dc "$fname" | wc -l) >> file_count.txt; done A: To run shell stuff on the results of find in a way that doesn't break on any special characters, you can use find -exec sh -c ... (see below). In this case, you don't really need that, if you can use bash's extglob to match in subdirectories for you. I just realized this is a ksh question, and IDK if it has something equivalent. shopt -s extglob for i in {asia,emea}/**/*.gz;do bn=${i##*/} # basename printf "%s %s\n" "$bn" "$(zcat "$i"|wc -l)" # stolen from David's answer done > linecounts.txt # redirect once outside the loop. This is like David's answer, except it will successfully count lines even in files with names containing a newline. The output file will be a mess, though, because newline is the usual record separator for textual data, so having it in filenames is just asking for trouble. If you know your directory structure, you don't need extglob and can just use */*/*.gz. Optionally with some leading characters to cut off some subdir searches. (bash isn't as smart as find when traversing directories, either. It always stats everything to see if it's a directory, even on filesystems that fill in the d_type field in readdir(3) results.) Note that with extglob, you do need dir/**/*.gz, not just dir/**.gz More generally, you can use find with xargs and shell commands by having xargs run sh -c, and then inside that -c, loop over the positional parameters. for i does that implicitly; i.e. it's equivalent to for i in "$@". find -name '*.gz' -print0 | xargs -0 bash -c 'for i in "$@";do ...loop body from above...;done > linecounts.txt' bash You can simplify this to having find run sh -c itself, if you have a find that supports the + terminator for -exec (to put a list of matches onto one command line): find -name '*.gz' -exec bash -c 'for i in "$@";do ...loop body from above...;done > linecounts.txt' bash {} + In both cases, you need a dummy arg before the args from find or xargs, because that will end up as argv[0] (traditionally the command name).
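If the shell quoting gets fiddly, the same report is only a few lines of Python (an illustrative sketch using just the standard library, mirroring the question's asia/emea layout):

    import gzip
    import os

    with open("file_count.txt", "w") as out:
        for top in ("asia", "emea"):
            for root, dirs, files in os.walk(top):
                for name in sorted(files):
                    if name.endswith(".gz"):
                        # decompress on the fly and count text lines
                        with gzip.open(os.path.join(root, name), "rt") as gz:
                            lines = sum(1 for _ in gz)
                        out.write("%s %s\n" % (name, lines))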
Q: Laravel Eloquent - Array in a Nested Relation, how to assign it?
I have a game database where a match has two teams (team_home_id and team_away_id). Users can make predictions for those games. So the model Match has this:

public function teamHome() {
    return $this->belongsTo(Team::class, 'team_home_id');
}

public function teamAway() {
    return $this->belongsTo(Team::class, 'team_away_id');
}

I want to get all predictions from a specific user for this match. The prediction model has a polymorphic table (because there are predictions for a match and predictions for the overall season). In the controller I do this:

$matches = Match::where('match_day', $match_day)->with('teamHome', 'teamAway', 'predictions')->get();

foreach ($matches as $match) {
    echo $match->teamHome->code . '-';
    echo $match->teamAway->code . ' '; // echoing is just for testing purposes
    //dd($match->predictions); // this is the problem, what is the best way?
}

I get an array with the predictions that I'm not able to assign to a specific user, except by manually looping with foreach and assigning them to a different array. But I don't think this would be a good way. How can I solve this problem? Thanks in advance.

A: There are several ways to solve your problem. This is just one: You can use a constraining eager load, that is, "only bring predictions that are from user X".

$matches = Match::where('match_day', $match_day)
    ->with('teamHome', 'teamAway')
    ->with(['predictions' => function($prediction) use($userId){
        $prediction->where('user_id', '=', $userId);
    }])
    ->get();

Source
Q: XML Serialization in Unity - Have an array hold different arrayitems?
So I'm working on my game in Unity, and I encountered a problem regarding XML. I've set up a system thanks to a tutorial that allows me to create items by reading their data from an XML database. But there is one problem. I want to set up my XML file to be read as follows:

<resistance>
  <physical>
    <phy>60.0</phy>
  </physical>
  <air>
    <air>50.0</air>
  </air>
</resistance>

However, I have not found a way to set the <resistance> element as the root to check the data in. The format of the XML file is as follows:

<Item>
  <id>0</id>
  <name>Helmet</name>
  <resistance>
    <physical>
      <phy>60.0</phy>
    </physical>
    <air>
      <air>50.5</air>
    </air>
  </resistance>
</Item>

[XmlArray("resistance"), XmlArrayItem("physical")] only reads the <physical> part. I've also tried writing everything as follows:

[XmlArray("resistance"), XmlArrayItem("physical"), XmlArrayItem("phy")]
public float[] phyres;
[XmlArray("air"), XmlArrayItem("air")]
public float[] airres;

But the XML file then got messy; although the data was read and I got the correct resistances, what followed after was not read, as if resistance became the new permanent root of the XML file. Thank you in advance.
Edit: In other words, I want to have a subroot in my <Item> child, and hold a few different arrays there.
Edit:Edit: Thank you jdweng, this ended up simpler to write:

[XmlElement("resistance"), XmlArrayItem("physical")]
public float[] phyres;
[XmlElement("air")]
public float[] airres;

But I still get the same issue. The root/namespace is set to <resistance>, and everything is read from that namespace afterwards. Not even the </resistance> after it affects it.

A: As I read it, your requirement is a resistance element that contains a physical and an air element. This:

[XmlElement("resistance"), XmlArrayItem("physical")]
public float[] phyres;
[XmlElement("air")]
public float[] airres;

Doesn't represent that. It implies a resistance element containing multiple physical elements followed by an air element. Here is a class structure that mirrors your XML:

public class Item
{
    [XmlElement("id")]
    public int Id { get; set; }
    [XmlElement("name")]
    public string Name { get; set; }
    [XmlElement("resistance")]
    public Resistance Resistance { get; set; }
}

public class Resistance
{
    [XmlArray("physical")]
    [XmlArrayItem("phy")]
    public float[] Phyres { get; set; }
    [XmlArray("air")]
    [XmlArrayItem("air")]
    public float[] Air { get; set; }
}
Q: Python script not stopping on sys.exit()
I wrote a script that can draw polylines (it's over on github) from ScalableVectorGraphics (.svg) by moving the mouse accordingly. When you are handing the control over your mouse to a script, a killswitch is certainly necessary, so I found an example for a keyboard listener somewhere on the Internet:

def on_press(key):
    try:
        sys.exit()
        print('alphanumeric key {0} pressed'.format(key.char))
        print('adsfadfsa')
    except AttributeError:
        print('special key {0} pressed'.format(key))

def main():
    listener = keyboard.Listener(  # TODO fix sys.exit()
        on_press=on_press)
    listener.start()

if __name__ == '__main__':
    main()

It seems to be working: If I add a print statement before the sys.exit() it is instantly executed properly. But with sys.exit() it keeps moving my mouse and the python interpreter is still in Task Manager. I don't know why it keeps executing. Thank you in advance for your suggestions. MrSmoer
Solution was: os._exit(1)
Full source code (with the missing import sys added):

import sys
from pynput import mouse as ms
from pynput import keyboard
from pynput.mouse import Button, Controller
import threading
import time
from xml.dom import minidom
import re

sys.path.append('/Users/MrSmoer/Desktop/linedraw-master')

mouse = ms.Controller()
tlc = None
brc = None
brc_available = threading.Event()
biggestY = 0
biggestX = 0
drwblCnvsX = 0
drwblCnvsY = 0

def on_click(x, y, button, pressed):
    if not pressed:
        # Stop listener
        return False

def on_press(key):
    try:
        sys.exit()
        print('alphanumeric key {0} pressed'.format(key.char))
        print('adsfadfsa')
    except AttributeError:
        print('special key {0} pressed'.format(key))

def initialize():
    print("Please select your program and then click at the two corners of the canvas. Press any key to cancel.")
    with ms.Listener(on_click=on_click) as listener:
        listener.join()
    print('please middleclick, when you are on top left corner of canvas')
    with ms.Listener(on_click=on_click) as listener:
        listener.join()
    global tlc
    tlc = mouse.position
    print('please middleclick, when you are on bottom left corner of canvas')
    with ms.Listener(on_click=on_click) as listener:
        listener.join()
    global brc
    brc = mouse.position
    mouse.position = tlc
    print('thread finished')
    brc_available.set()

def getDrawabableCanvasSize(polylines):
    global biggestX
    global biggestY
    for i in range(len(polylines)):  # goes through all polylines
        points = hyphen_split(polylines[i])  # Splits polylines to individual points
        for c in range(len(points)):  # goes through all points on polyline
            cord = points[c].split(',')  # splits points in x and y axis
            if float(cord[0]) > (biggestX - 5):
                biggestX = float(cord[0]) + 5
            if float(cord[1]) > (biggestY - 5):
                biggestY = float(cord[1]) + 5
    print('TLC: ', tlc)
    print('bigX: ', biggestX)
    print('bigY: ', biggestY)
    cnvswidth = tuple(map(lambda i, j: i - j, brc, tlc))[0]
    cnvsheight = tuple(map(lambda i, j: i - j, brc, tlc))[1]
    cnvsapr = cnvswidth / cnvsheight
    print('Canvasaspr: ', cnvsapr)
    drwblcnvaspr = biggestX / biggestY
    print('drwnble aspr: ', drwblcnvaspr)
    if drwblcnvaspr < cnvsapr:  # es mus h vertikal saugend
        print('es mus h vertikal saugend')
        finalheight = cnvsheight
        finalwidth = finalheight * drwblcnvaspr
    else:  # es muss horizontal saugend, oder aspect ratio ist eh gleich
        print('es muss horizontal saugend, oder aspect ratio ist eh gleich')
        finalwidth = cnvswidth
    scalefactor = finalwidth / biggestX
    print(scalefactor)
    return scalefactor

def drawPolyline(polyline, scalefactor):
    points = hyphen_split(polyline)
    #print(points)
    beginpoint = tlc
    for c in range(len(points)):  # goes through all points on polyline
        beginpoint = formatPoint(points[c], scalefactor)
        if len(points) > c + 1:
            destpoint = formatPoint(points[c + 1], scalefactor)
            mouse.position = beginpoint
            time.sleep(0.001)
            mouse.press(Button.left)
            # time.sleep(0.01)
            mouse.position = destpoint
            # time.sleep(0.01)
            mouse.release(Button.left)
        else:
            destpoint = tlc
            #print("finished line")
            mouse.release(Button.left)

def formatPoint(p, scale):
    strcord = p.split(',')
    #print(scale)
    #print(tlc)
    x = float(strcord[0]) * scale + tlc[0]  # + drwblCnvsX/2
    y = float(strcord[1]) * scale + tlc[1]  # + drwblCnvsY/2
    #print('x: ', x)
    #print('y: ', y)
    thistuple = (int(x), int(y))
    return thistuple

def hyphen_split(a):
    return re.findall("[^,]+\,[^,]+", a)  # ['id|tag1', 'id|tag2', 'id|tag3', 'id|tag4']

def main():
    listener = keyboard.Listener(  # TODO fix sys.exit()
        on_press=on_press)
    listener.start()
    thread = threading.Thread(target=initialize())  # waits for initializing function (two dots)
    thread.start()
    brc_available.wait()
    # print(sys.argv[1])
    doc = minidom.parse('/Users/MrSmoer/Desktop/linedraw-master/output/out.svg')  # parseString also exists
    try:
        if sys.argv[1] == '-ip':
            doc = minidom.parse(sys.argv[2])
    except IndexError:
        print('Somethings incorrect1')
    polylines = NotImplemented
    try:
        doc = minidom.parse('/Users/MrSmoer/Desktop/linedraw-master/output/out.svg')  # parseString also exists
        # /Users/MrSmoer/Desktop/linedraw-master/output/output2.svg
        #doc = minidom.parse('/Users/MrSmoer/Desktop/Test.svg')
        polylines = [path.getAttribute('points') for path in doc.getElementsByTagName('polyline')]
        doc.unlink()
    except:
        print('Somethings incorrect3')
    # print(polylines)
    scalefactor = getDrawabableCanvasSize(polylines)
    for i in range(len(polylines)):
        drawPolyline(polylines[i], scalefactor)

if __name__ == '__main__':
    main()

A: Sometimes, when writing a multithreaded app, raise SystemExit and sys.exit() both kill only the running thread. On the other hand, os._exit() exits the whole process. While you should generally prefer sys.exit because it is more "friendly" to other code, all it actually does is raise an exception. If you are sure that you need to exit a process immediately, and you might be inside of some exception handler which would catch SystemExit, there is another function - os._exit - which terminates immediately at the C level and does not perform any of the normal tear-down of the interpreter.

A simple way to terminate a Python script early is to use the built-in quit() function. There is no need to import any library, and it is efficient and simple. Example:

#do stuff
if this == that:
    quit()

You may try these options!! Hope it works fine!! If not tell us, we will try more solutions!

A: That sys.exit() ends up killing only the thread that's executing it, but your program seems to take advantage of multiple threads. You will need to kill all the threads of the program including the main one if you want to exit.
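For reference, a minimal sketch of the killswitch that ended up working (pynput plus os._exit; illustrative only, not the full script above):

import os
from pynput import keyboard

def on_press(key):
    # os._exit() terminates the whole process at the C level,
    # killing the listener thread *and* the main thread.
    os._exit(1)

listener = keyboard.Listener(on_press=on_press)
listener.start()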
Q: respondsToSelector failing
I have an XML callback selector that seems to fail at the respondsToSelector test and I am not sure why. Why is the call failing? The callback is set like so:

[handler setXMLCallBackDelegate:self :@selector(gotXMLCallback)];

The callback is defined like so (in the calling class):

-(void)gotXMLCallback:(id)sender{
    NSLog(@"CALLBACK YAY");
}

And the callback is called using this code (from within handler):

if (gotXMLCallback && gotXMLCallbackSelector && [gotXMLCallback respondsToSelector:gotXMLCallbackSelector]) {
    (void) [gotXMLCallback performSelector:gotXMLCallbackSelector withObject:self];
}

A: The colon is part of the selector, so it should be @selector(gotXMLCallback:).

A: To establish a selector you should call it

[gotXMLCallback performSelector:@selector(gotXMLCallbackSelector:) withObject:self];
Q: How to have background agent app run function hourly on macOS?
I am developing an app and I would like to run a function hourly on macOS (at 8:00, 9:00, 10:00, …). I used to use ~/Library/LaunchAgents/…, but notifications are broken when the app is not running in the background. Similar to a Linux cron job… is that possible?

A: If you'd like it when you're logged in, use a Launch Agent. If you want it even when no one is logged in, use a Launch Daemon. See Creating Launch Daemons and Agents in the Daemons and Services Programming Guide. Ultimately, you'll create a plist file such as this one (from the docs) that specifies the interval you want (very much like cron):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.touchsomefile</string>
    <key>ProgramArguments</key>
    <array>
        <string>touch</string>
        <string>/tmp/helloworld</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Minute</key>
        <integer>45</integer>
        <key>Hour</key>
        <integer>13</integer>
        <key>Day</key>
        <integer>7</integer>
    </dict>
</dict>
</plist>

For the hourly schedule in the question, keep only the Minute key (set to 0) and drop Hour and Day; launchd treats omitted StartCalendarInterval keys as wildcards, so the job fires at the top of every hour.

This will go in /Library/LaunchDaemons, /Library/LaunchAgents or ~/Library/LaunchAgents, depending on whether you want it tied to the whole system, all users, or just one user. Note that Launch Daemons have no access to the windowing system, so it's hard for them to do things like launch programs. They also may be more limited than you'd expect to user data. (Running as root can give you less access than running as a user.) See also man launchctl for loading and unloading them by hand, and monitoring them generally.
Q: jquery insertafter multidimensional array
I have a text form field. It can be cloned/duplicated. It generates name tags like user[name][1][1], user[name][2][1], user[name][3][1] etc... I want to append a link next to those text fields using jquery. I tried this:

<script type="text/javascript">
$(function(){
    $('<a href="#">Example link</a>').insertAfter('input[name="user\[name\]\[\]\[\]"]')
});
</script>

But it's not working. Can anyone help me? Thanks

A: Per the jQuery selector docs, special characters need to be escaped with double backslash. For all of the elements at once you can use the startsWith selector and do:

$('input[name^="user\\[name\\]"]');

DEMO: http://jsfiddle.net/dpJZE/2/
API selector docs: http://api.jquery.com/category/selectors/ Read the top paragraph regarding escaping.
Q: I2C IOCTL Write Failure
Hey, I am trying to write a user space application to move some data to an I2C for an embedded system running PetaLinux, an operating system for embedded Linux, although I do not think that is what is affecting the issue. I am getting a Connection timeout and a segmentation fault. The function has macros that direct it to write to the first I2C bus. I specify the data that I want to write in main and pass it to i2c_write, which then passes it to i2c_ioctl_write. Here is the code:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>

#define I2C_ADAPTER "/dev/i2c-0"
#define I2C_DEVICE 0x00
#define REG_ADDR 0x00

int i2c_ioctl_write (int fd, uint8_t dev, uint8_t regaddr, uint16_t *data)
{
    printf("i2c_ioctl_write\n");
    int i, j = 0;
    int ret;
    uint8_t *buf;

    buf = malloc(1 + 2 * (sizeof(data) / sizeof(data[0])));
    if (buf == NULL) {
        return -ENOMEM;
    }
    printf("\tBuffer Allocation Successful...\n");

    buf[j ++] = regaddr;
    for (i = 0; i < (sizeof(data) / sizeof(data[0])); i ++) {
        buf[j ++] = (data[i] & 0xff00) >> 8;
        buf[j ++] = data[i] & 0xff;
    }
    printf("\tBuffer Setup Successful...\n");

    struct i2c_msg messages[] = {
        {
            .addr = dev,
            .buf = buf,
            .len = sizeof(buf) / sizeof(buf[0]),
        },
    };
    printf("\tSetup I2C Messages...\n");

    struct i2c_rdwr_ioctl_data payload = {
        .msgs = messages,
        .nmsgs = sizeof(messages) / sizeof(messages[0]),
    };
    printf("\tSetup I2C IOCTL Payload...\n");

    ret = ioctl(fd, I2C_RDWR, &payload);
    printf("\tWrote with IOCTL...\n");
    if (ret < 0) {
        ret = -errno;
    }

    free (buf);
    return ret;
}

int i2c_ioctl_smbus_write (int fd, uint8_t dev, uint8_t regaddr, uint16_t *data)
{
    printf("i2c_ioctl_smbus_write\n");
    int i, j = 0;
    int ret;
    uint8_t *buf;

    buf = malloc(2 * (sizeof(data) / sizeof(data[0])));
    if (buf == NULL) {
        return -ENOMEM;
    }

    for (i = 0; i < (sizeof(data) / sizeof(data[0])); i ++) {
        buf[j ++] = (data[i] & 0xff00) >> 8;
        buf[j ++] = data[i] & 0xff;
    }

    struct i2c_smbus_ioctl_data payload = {
        .read_write = I2C_SMBUS_WRITE,
        .size = I2C_SMBUS_WORD_DATA,
        .command = regaddr,
        .data = (void *) buf,
    };

    ret = ioctl (fd, I2C_SLAVE_FORCE, dev);
    if (ret < 0) {
        ret = -errno;
        goto exit;
    }

    ret = ioctl (fd, I2C_SMBUS, &payload);
    if (ret < 0) {
        ret = -errno;
        goto exit;
    }

exit:
    free(buf);
    return ret;
}

int i2c_write (int fd, uint8_t dev, uint8_t regaddr, uint16_t *data)
{
    printf("i2x_write\n");
    uint64_t funcs;

    if (ioctl(fd, I2C_FUNCS, &funcs) < 0) {
        return -errno;
    }

    if (funcs & I2C_FUNC_I2C) {
        return i2c_ioctl_write (fd, dev, regaddr, data);
    } else if (funcs & I2C_FUNC_SMBUS_WORD_DATA) {
        return i2c_ioctl_smbus_write (fd, dev, regaddr, data);
    } else {
        return -ENOSYS;
    }
}

int main (int argc, char *argv[])
{
    printf("main\n");
    uint8_t regaddr;
    int fd;
    int ret = 0;
    uint16_t data[] = {1, 2, 4};

    fd = open(I2C_ADAPTER, O_RDWR | O_NONBLOCK);
    ret = i2c_write(fd, I2C_DEVICE, REG_ADDR, data);
    close(fd);

    if (ret) {
        fprintf (stderr, "%s.\n", strerror(-ret));
    }

    free(data);
    return ret;
}

When I run the program on QEMU I get the following output:

main
i2x_write
i2c_ioctl_write
    Buffer Allocation Successful...
    Buffer Setup Successful...
    Setup I2C Messages
    Setup I2C IOCTL Payload
cdns-i2c e0004000.i2c: timeout waiting on completion
    Wrote with IOCTL
Connection timed out.
Segmentation fault

I assume it is failing on the line ret = ioctl(fd, I2C_RDWR, &payload); but I am not sure why. Was the payload constructed improperly?
Update: Here is the current code:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>

#define I2C_ADAPTER "/dev/i2c-0"
#define I2C_DEVICE 0x00

int main (int argc, char *argv[])
{
    int fd;
    int ret = 0;

    fd = open(I2C_ADAPTER, O_RDWR | O_NONBLOCK);

    uint64_t funcs;
    int addr = 0X00;
    if (ioctl(fd, I2C_SLAVE, addr) < 0) {
        /* ERROR HANDLING; you can check errno to see what went wrong */
        printf("Cannot setup as slave");
        exit(1);
    }

    if (ioctl(fd, I2C_FUNCS, &funcs) < 0) {
        printf("ioctl failed");
        return -errno;
    }

    printf("funcs & I2C_FUNC_I2C: %llu\n", funcs & I2C_FUNC_I2C);
    printf("funcs & I2C_FUNC_SMBUS_WORD_DATA: %llu\n", funcs & I2C_FUNC_SMBUS_WORD_DATA);

    __u8 reg = 0x10;
    __s32 res;

    if (funcs & I2C_FUNC_I2C) {
        char buf[10];
        printf("Attempting to write to I2C bus via I2C protocol...\n");
        buf[0] = reg;
        buf[1] = 0x43;
        buf[2] = 0x65;
        int bytes_written = write(fd, buf, 3);
        if(bytes_written != 3) {
            printf("Wrote %d bytes", bytes_written);
            printf("\tFailed to write to I2C Bus\n");
            close(fd);
            return -1;
        } else {
            printf("\tSuccesful write to I2C Bus\n");
        }

        char buf2[10];
        printf("Attempting to read from I2C bus via I2C protocol...\n");
        if(read(fd, buf2, 1) != 1) {
            printf("\tFailed to do I2C read from Bus\n");
            close(fd);
            return -1;
        } else {
            printf("\tRead successful. Comparing read results from original write buffer...");
            printf("\t\tWritten value: %c", buf[0]);
            printf("\t\tRead value: %c", buf2[0]);
        }
        return 0;
    } else if (funcs & I2C_FUNC_SMBUS_WORD_DATA) {
        printf("Attempting to write to I2C bus via SMBus protocol...\n");
        //res = i2c_smbus_write_word_data(fd, REG_ADDR, 0x6543);
        res = 1;
        if(res < 0) {
            printf("\tFailed to write to I2C Bus\n");
            close(fd);
            return -1;
        } else {
            printf("\tSuccesful write to I2C Bus\n");
        }
        //res = i2c_smbus_read_word_data(fd, REG_ADDR);
        if(res < 0) {
            printf("\tFailed to read from I2C Bus\n");
            close(fd);
            return -1;
        } else {
            printf("\tRead successful. Comparing read results from original write buffer...");
            printf("\t\tWritten value: %c", 0x6543);
            printf("\t\tRead value: %c", res);
        }
    } else {
        printf("Cannot write to I2C");
        return -ENOSYS;
    }

    close(fd);
    if (ret) {
        fprintf (stderr, "%s.\n", strerror(-ret));
    }
    return ret;
}

I was able to get rid of the seg fault by removing free(), so thanks there. I have pinpointed the exact issue of the timeout, which occurs in the Cadence I2C driver here: https://github.com/Xilinx/linux-xlnx/blob/3f3c7b60919d56119a68813998d3005bca501a40/drivers/i2c/busses/i2c-cadence.c#L825 which is still occurring. As mentioned, there is probably some issue with the way I am writing to the slave, causing the slave to not send ACK, resulting in a timeout. I am not sure which registers I will need to write what to. I have a feeling the I2C_DEVICE macro and the addr and reg variables will need to be changed.

A: cdns-i2c e0004000.i2c: timeout waiting on completion

It seems that the i2c driver (cdns-i2c) doesn't receive the acknowledgment from the slave. This can happen because you are using 0x00 as the I2C slave address, which is a general call address. When using the general call address, the second byte that is sent has a special purpose, as described in the i2c specification (section 3.1.13). If you use the general call address you need to follow the specification; otherwise, try using the exact i2c slave address instead of the general call address (0x00).
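As an aside, for quick user-space experiments it can be faster to poke the bus from Python with the smbus2 package than to iterate on the C program. A sketch (the 0x50 device address and 0x10 register are placeholders; use your part's real address from its datasheet or i2cdetect, not the general call address 0x00):

from smbus2 import SMBus

with SMBus(0) as bus:  # /dev/i2c-0
    bus.write_byte_data(0x50, 0x10, 0x43)       # device 0x50, register 0x10, value 0x43
    print(hex(bus.read_byte_data(0x50, 0x10)))  # read the register back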
Q: How to get Firebase RemoteConfig Parameters in unity
I want to get this parameter. I'm trying to do this with this code

FirebaseRemoteConfig.GetInstance(Firebase.FirebaseApp.DefaultInstance).GetValue("VERSION").ToString()

but it returns nothing. Also I've tried getting it with FirebaseRemoteConfig.DefaultInstance.GetValue but I have the same result.

using UnityEngine;
using UnityEngine.SceneManagement;
using Firebase.RemoteConfig;

public class GameOpening : MonoBehaviour
{
    Firebase.DependencyStatus dependencyStatus = Firebase.DependencyStatus.UnavailableOther;

    // Use this for initialization
    void Start()
    {
        Firebase.FirebaseApp.CheckAndFixDependenciesAsync().ContinueWith(task => {
            dependencyStatus = task.Result;
            if (dependencyStatus == Firebase.DependencyStatus.Available)
            {
                Debug.Log(FirebaseRemoteConfig.DefaultInstance.GetValue("VERSION"));
            }
            else
            {
                Debug.LogError("Could not resolve all Firebase dependencies: " + dependencyStatus);
            }
        });
    }
}

A: You have to Fetch and Activate FirebaseRemoteConfig before you can start using it. Also you should decide on a type of value you will use, Long for example. The next example should work (note the added using System; for TimeSpan):

using System;
using UnityEngine;
using UnityEngine.SceneManagement;
using Firebase.RemoteConfig;

public class GameOpening : MonoBehaviour
{
    Firebase.DependencyStatus dependencyStatus = Firebase.DependencyStatus.UnavailableOther;

    // Use this for initialization
    async void Start()
    {
        await Firebase.FirebaseApp.CheckAndFixDependenciesAsync().ContinueWith(async task => {
            var dependencyStatus = task.Result;
            if (dependencyStatus == Firebase.DependencyStatus.Available)
            {
                await FirebaseRemoteConfig.DefaultInstance.FetchAsync(TimeSpan.Zero);
                await FirebaseRemoteConfig.DefaultInstance.ActivateAsync();
                UnityEngine.Debug.Log(FirebaseRemoteConfig.DefaultInstance.GetValue("VERSION").LongValue);
            }
            else
            {
                UnityEngine.Debug.LogError("Could not resolve all Firebase dependencies: " + dependencyStatus);
            }
        });
    }
}
Q: Negative Pell's Equation: Prove that $k=3$. I made this problem (while solving another problem) but I haven't been able to prove it. Let $x,y,k\in \mathbb{Z}^+$. Prove that if $x^2-(k^2-4)y^2=-1$ then $k=3$. Any pointers are appreciated, but a solution would be great. An interesting observation is that $k^2-4=p$ for a prime $p$ has the unique solution $k=3$ (and $p=5$), so perhaps we can show that $k^2-4$ must be a prime? Then the result will follow. Thanks! A: The simple continued fraction for this is entirely predictable and provable. First note that $k$ cannot be even, as $-1$ is not a square mod 4. Also $k \neq 1 \pmod 4$ because then both $k+2 \equiv 3 \pmod 4$ and $k-2 \equiv 3 \pmod 4.$ Both factors are divisible by a prime $q \equiv 3 \pmod 4,$ hence $-1$ is not a quadratic residue. We are left with $k \equiv 3 \pmod 4.$ Take $$ k = 4n+3 $$ with $$ n \geq 1. $$ Also take $$ a_0 = \lfloor \sqrt {k^2 - 4} \rfloor = k-1. $$ The continued fraction is $$ \langle a_0; 1,2n,2,2n,1,2a_0 \rangle. $$ I should point out that $a_0 + \sqrt {k^2 - 4}$ is a "reduced surd" in the sense meant by the wikipedia selection I link above. The continued fraction for it is $$ \langle 2a_0, 1,2n,2,2n,1, 2a_0, 1,2n,2,2n,1, 2a_0, 1,2n,2,2n,1, \ldots \rangle $$ forever. The primitively represented values of $x^2 - (k^2 - 4) y^2$ given by the convergents of the continued fraction are $$ 1, \; 4, \; -(4n+1), \; -(8n+1) $$ so we do not get $-1$ as soon as $n \geq 1.$ The fixed length of the continued fraction reflects this polynomial identity below: first note, with $k = 4n+3,$ we have $k^2 - 4 = 16n^2 + 24 n + 5.$ The identity is $$\color{blue}{ \left( 32 n^3 + 72 n^2 + 48 n + 9 \right)^2 - \left( 16 n^2 + 24 n + 5 \right) \left(8n^2 + 12n + 4 \right)^2 = 1}. $$ $$ \begin{array}{cccccccccccccccccccccccccccccc} & & 4n+2 & & 1 & & 2n & & 2 & & 2n & & 1 & & 8n+4 \\ \frac{0}{1} & \frac{1}{0} & & \frac{4n+2}{1} & & \frac{4n+3}{1} & & \frac{8n^2+10n+2}{2n+1} & & \frac{16n^2 +24n+7}{4n+3} & & \frac{32n^3 +56n^2+24n+2}{8n^2+8n+1} & & \frac{32n^3+72n^2+48n+9}{8n^2+12n+4} & & \\ \\ & 1 & & -8n-1 & & 4 & & -4n-1 & & 4 & & -8n-1 & & 1 & & \end{array} $$ jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 45 0 form 1 12 -9 delta -1 1 form -9 6 4 delta 2 2 form 4 10 -5 delta -2 3 form -5 10 4 delta 2 4 form 4 6 -9 delta -1 5 form -9 12 1 delta 12 6 form 1 12 -9 disc 180 Automorph, written on right of Gram matrix: 17 216 24 305 Pell automorph 161 1080 24 161 Pell unit 161^2 - 45 * 24^2 = 1 ========================================= 4 PRIMITIVE 7^2 - 45 * 1^2 = 4 ========================================= 45 3^2 * 5 ======================================================= jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 117 0 form 1 20 -17 delta -1 1 form -17 14 4 delta 4 2 form 4 18 -9 delta -2 3 form -9 18 4 delta 4 4 form 4 14 -17 delta -1 5 form -17 20 1 delta 20 6 form 1 20 -17 disc 468 Automorph, written on right of Gram matrix: 49 1020 60 1249 Pell automorph 649 7020 60 649 Pell unit 649^2 - 117 * 60^2 = 1 ========================================= 4 PRIMITIVE 11^2 - 117 * 1^2 = 4 ========================================= 117 3^2 * 13 ======================================================== jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 221 0 form 1 28 -25 delta -1 1 form -25 22 4 delta 6 2 form 4 26 -13 delta -2 3 form -13 26 4 delta 6 4 form 4 22 -25 delta -1 5 form -25 28 1 delta 28 6 form 1 28 -25 disc 884 Automorph, written on right of Gram matrix: 97 2800 112 3233 Pell automorph 1665 
24752 112 1665 Pell unit 1665^2 - 221 * 112^2 = 1 ========================================= 4 PRIMITIVE 15^2 - 221 * 1^2 = 4 ========================================= 221 13 * 17 ================================================ jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 357 0 form 1 36 -33 delta -1 1 form -33 30 4 delta 8 2 form 4 34 -17 delta -2 3 form -17 34 4 delta 8 4 form 4 30 -33 delta -1 5 form -33 36 1 delta 36 6 form 1 36 -33 disc 1428 Automorph, written on right of Gram matrix: 161 5940 180 6641 Pell automorph 3401 64260 180 3401 Pell unit 3401^2 - 357 * 180^2 = 1 ========================================= 4 PRIMITIVE 19^2 - 357 * 1^2 = 4 ========================================= 357 3 * 7 * 17 ============================================================= jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 525 0 form 1 44 -41 delta -1 1 form -41 38 4 delta 10 2 form 4 42 -21 delta -2 3 form -21 42 4 delta 10 4 form 4 38 -41 delta -1 5 form -41 44 1 delta 44 6 form 1 44 -41 disc 2100 Automorph, written on right of Gram matrix: 241 10824 264 11857 Pell automorph 6049 138600 264 6049 Pell unit 6049^2 - 525 * 264^2 = 1 ========================================= 4 PRIMITIVE 23^2 - 525 * 1^2 = 4 ========================================= 525 3 * 5^2 * 7 =================================================================
Q: How do I return an NpgsqlCommand from a function when using a connection pool?
I have a function like this:

NpgsqlConnection GetConnection() {
    var connection = new NpgsqlConnection(connectionString);
    connection.Open();
    return connection;
}

I use it like this:

using (var connection = GetConnection()) {
    using (var q = new NpgsqlCommand {Connection = connection}) {
        q.CommandText = "SELECT ... FROM ...;";
        q.ExecuteReader();
    }
}

Everything works, but I want to shorten the code to:

using (var q = GetCommand("SELECT ... FROM ...;")) {
    q.ExecuteReader();
}

The question is how to write GetCommand. If I naively return the created command, the connection is never closed and the pool eventually overflows. If I add NpgsqlCommand.Connection.Dispose() in the command's Disposed handler, by that point Connection = null and there is nothing left to close. Maybe I'm missing something obvious, but how should this GetCommand be written:

NpgsqlCommand GetCommand(string commandText) {
    return new NpgsqlCommand(commandText, GetConnection());
}

A: Well, if you keep track of which connection belongs to which command, it works like this...

public static NpgsqlCommand GetCommand(string commandText = "")
{
    lock (_instance._locker)
    {
        var conn = GetConnection();
        var cmd = new NpgsqlCommand(commandText, conn);
        cmd.Disposed += (o, e) =>
        {
            if (_instance._pool.TryGetValue(o as NpgsqlCommand ?? throw new InvalidOperationException(), out var c))
            {
                c.Close();
            }
        };
        _instance._pool[cmd] = conn;
        return cmd;
    }
}
Q: GrapeCity ActiveReportsJS How to pass Authorization Header for DataSource I'm asking on behalf of our UI dev. He's coding in VueJS, but this should be specific to the ActiveReportsJS report viewer or report designer only since it's still just JavaScript. We're using JWT's with our .net core web API, so we need to pass an Authorization header with each report datasource request. There seems to be no documentation at all on this, and I would believe that most companies would require some type of authorization to access reports so that the endpoints aren't just sitting out there for anyone to consume freely. Any help would be appreciated, thanks in advance for any info. A: The official answer from GrapeCity is that they don't support authorization for datasources in the JS version. I'm not sure what the use for this product is, but it definitely doesn't fit our use-case.
Q: How can I turn a string containing a dollar amount into an integer?
How can I turn a string containing a dollar value in the form "$39,900" into an integer so I can perform calculations with it? I figured I would try gsub, but that doesn't appear to work:

str = "$39,900"
str.strip.gsub('$' '')
=> #<Enumerator: "$39,900":gsub("$")>

Can someone please share with me the proper way to go about this?

A: .gsub('$' '') --> .gsub('$', '') (missing a comma)

A: str.tr('$,', '').to_i should work. I used #tr here; how it works is best explained in the documentation.

A: A regexp version would be:

str.gsub(/[^\d]/, '').to_i

Where [^\d] stands for "every character that's not a number".

A: I'd use:

str = "$39,900"
str.tr('^0-9', '').to_i # => 39900

Here's how it breaks down:

str                # => "$39,900"
  .tr('^0-9', '')  # => "39900"
  .to_i            # => 39900

'^0-9', '' in tr means "replace everything that is not 0..9 with ''", resulting in only digits. tr is extremely fast, much faster than gsub for this use, and well worth knowing about and using for this sort of problem. If you insist on using gsub with a regex, this will do it:

str.gsub(/\D+/, '').to_i # => 39900

however I'd still recommend using tr. Here's why:

require 'fruity'

str = "$39,900"
compare do
  _gsub { str.gsub(/\D+/, '') }
  _tr { str.tr('^0-9', '') }
  jason_yost { str.scan(/\d/).join('') }
  nikita_mishain { str.tr('$,', '') }
  ruben_tf { str.gsub(/[^\d]/, '') }
end
# >> Running each test 8192 times. Test will take about 2 seconds.
# >> nikita_mishain is similar to _tr
# >> _tr is faster than ruben_tf by 4x ± 1.0
# >> ruben_tf is similar to _gsub
# >> _gsub is faster than jason_yost by 2x ± 0.1

A: str = "$39,900"
str.scan(/\d/).join('').to_i
=> 39900

If the string doesn't contain a number this will return 0

str = "test"
str.scan(/\d/).join('').to_i
=> 0

This will not handle decimals properly. If you need to get a float value you can use

str = '%.2f' % '$39,000.99'.delete( "$" ).delete(',')
str.to_f
=> 39000.99
Q: Regex For Date validation in Javascript pointing error
I am using the following regex to validate my date in dd.mm.yyyy format. But any date containing 19 as the day is not accepted, such as 19.10.2015, 19.09.2015

var rgexp = /(^(((0[1-9]|[12][0-8])[.](0[1-9]|1[012]))|((29|30|31)[.](0[13578]|1[02]))|((29|30)[.](0[4,6,9]|11)))[.](19|[2-9][0-9])\d\d$)|(^29[.]02[.](19|[2-9][0-9])(00|04|08|12|16|20|24|28|32|36|40|44|48|52|56|60|64|68|72|76|80|84|88|92|96)$)/;

I did not find the problem. Can anybody help me?
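Edit: narrowing it down, the day branch (0[1-9]|[12][0-8]) only admits 01-09, 10-18 and 20-28, and the remaining branches only add 29, 30 and 31, so a day of 19 can never match. A quick check of that branch in isolation (shown in Python for convenience; the pattern behaves the same in JavaScript):

import re

# the day-matching branch lifted out of the full pattern above
day_branch = re.compile(r'^(0[1-9]|[12][0-8])$')

print(bool(day_branch.match('18')))  # True
print(bool(day_branch.match('19')))  # False: [12][0-8] stops at 8
print(bool(day_branch.match('20')))  # True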
Q: Making charts in ASP.NET with Flot and get the values from code-behind file
I'm working on a project where I have input from the user from the ASP.NET page. This input is then processed into data, and with this data I can create charts; this works with the standard Microsoft Charting libraries. But now I want to make the same chart with Flot. I can make a Flot chart, but its data is hard-coded in the .ASPX page. This is with jQuery. What I'm trying to do is let the code-behind file fill in the chart. I know I need to use JSON (which I'm not acquainted with), but I don't know how.

A: I think what you want to do is to use AJAX to load data dynamically from a server. Here is an example that does as much. http://people.iola.dk/olau/flot/examples/ajax.html Let me know if this is not what you are looking for.
Q: Can Mercurial merge a named-branch that isn't a head? Basically, I have dev branch, and what I like to do is to create a feature branch while I implement something, and then merge it back. So situations like the following occurs a b c d - dev / e f - feature Since dev isn't a head, is it still possible to bring dev up to feature such that both dev and feature are pointing to f? I'm pretty sure git can do this just fine, but can't seem to convince Mercurial to do the same... A: Named branches in hg (unlike in git) don't "point" anywhere. Branch names aren't movable aliases for a particular rev. Each commit has a metadata marker naming the branch that commit is on; that's all. In this situation, if you have no separate commits descending from "d" on the dev branch, then all you need to do is run "hg branch dev" and then your next commit, descended from "f", will be back on branch dev. Which I think will achieve the results you're looking for. EDIT: That will work, but Steve Losh's suggestion of doing an actual merge will result in a more sensible history. A: Carl Meyer is right. You're thinking as a git user, and Mercurial handles things differently. You could do what Carl suggested and just force the next commit to be on the dev branch. I'd personally find this rather confusing if I saw it though, since there would be a discontinuity in the dev branch. The way I'd handle it is to merge the feature branch back in: hg update dev && hg merge feature && hg commit -m 'Merge in the completed feature.' This would result in a graph like: a - dev b - dev c - dev d - dev /| e | - feature f | - feature \| g - dev For me, this clearly illustrates exactly what happened. You branched off for a new feature and merged it into the dev branch when finished. The fact that there were no other commits on dev in the meantime is just a coincidence and doesn't have to change the workflow.
Q: Best way to export and import products I am using Magento 1.9.2.4 and want to know what is the best way to export and import products from one Magento installation to another. I read a lot of things but none of them worked for me. My current Magento store has multiple websites, configurable products etc. When I try the simple export (System > Import/Export > Export) it does not work because the new store (where I am importing the products) does not have multiple websites. When I try to use the profile export (System > Import/Export > Dataflow - Profile) it does not work because the configurable products can't find the attributes set on the new store. Is there a good way to "transfer" products from one installation to another?
Q: HTML—putting (Apple Pages or MS Word) class tags on text
I want to generate html text that, when copy/pasted to a WYSIWYG editor like Word or Pages, has the text belonging to different style groups (like a css class), so that in the editor a user could e.g. change the font color for a particular group of text. Is this possible?
Edit
Just to clarify, I'm wondering if there is some way to tag/add attributes to the HTML so that when it is transferred to a word processor, all text of that group can be restyled at once within the processor. I don't need any changes in the processor to be carried back to HTML; this is a one-way flow of data.
Edit 2
I mean is there something to add to HTML text that word processors (or just one in particular) can recognise as a "class" of sorts, so that you can then make bulk changes to all text of that group? Below is a screenshot of Mac Pages' "Style" groups for reference—is there some way I could copy HTML and paste it in this Pages document, so that Pages would know I'm pasting in e.g. "Heading Red" text?

A: You can edit the actual HTML text in a word processor such as MS Word or Pages, but changing the font color, text size, etc through the word processor's functions as you would while writing a document will not have any effect when the HTML is viewed in a browser.
Q: How to check if value is an object in Handlebars?
jsBin Example Here is my little model:

var stuff = [{
  there: 'blah',
  that: {
    one: 'bbb',
    two: 'ccc'
  }
}];

First, for the following template, I don't understand why the first {{@key}} doesn't output anything and the second one does.

{{#each this}}
  {{@key}}
  {{#each that}}
    {{@key}}
  {{/each}}
{{/each}}

And more importantly I am trying to use this next template and a helper to check if a value is an object or a string and either iterate over it and print the keys, or just print out the key.

{{#each this}}
  {{#if isObj this}}
    {{#each that}}
      {{@key}}
    {{/each}}
  {{else}}
    {{@key}}
  {{/if}}
{{/each}}

Helper:

Handlebars.registerHelper('isObj', function(thing) {
  return $.type(thing) === 'object';
});

A: The first one I can answer: you should use {{@index}} instead of {{@key}} because you're iterating an array. I'm looking into the second one.

A: {{#each this}}
  key: {{@index}}
  {{#each that}}
    key1: {{@key}}
  {{/each}}
{{/each}}

For part b it seems you're going to have to register a new helper function, as if can't take the return value from another function. Your block helper will be something like (pretty much stole this from here):

Handlebars.registerHelper('ifObject', function(item, options) {
  if(typeof item === "object") {
    return options.fn(this);
  } else {
    return options.inverse(this);
  }
});

Now change your template to something like:

{{#each this}}
  {{#ifObject this}}
    {{#each that}}
      {{@key}}
    {{/each}}
  {{else}}
    {{@key}}
  {{/ifObject}}
{{/each}}

This was working on tryhandlebars.com and I updated your jsbin, hope it helps!
Q: xml2js not parsing xml retrieved through proxy
I am working on a nodejs app and using xml2js to parse xml files. When the xml file is local, I have no problems parsing it with xml2js; however, I need to retrieve remote xml files, and I need to connect through a proxy. This code (reading a local xml file) works:

var rundownParser = new xml2js.Parser();

function parseRundown(){
    fs.readFile(rundownFolder + '/' + rundownFile, function(err, data) {
        rundownParser.parseString(data);
    });
}

rundownParser.on('end', function(result) {
    console.log("PARSER ENDED")
});

This is my code that retrieves the remote XML file through the proxy:

var rundownParser = new xml2js.Parser();

function parseRundown(){
    var options = {
        host: proxyHost,
        port: proxyPort,
        path: mosGatewayPath,
        method: 'GET',
        headers : { host: mosGatewayHost }
    };
    var req = http.request(options, function(res) {
        res.on('data', function (response) {
            console.log("REMOTE DATA: "+response)
            rundownParser.parseString(response);
        });
    });
    req.end();
}

rundownParser.on('end', function(result) {
    console.log("PARSER ENDED")
});

In the code that retrieves the remote file, I see the correct response in the 'data' event. So I know that the connection is working, but after that it just hangs; it never reaches the parser's 'end' event. No errors are thrown. I would really appreciate any help in pointing out what I am doing incorrectly. TIA!

A: Try calling parseString after the response emits 'end', e.g.

var req = http.request(options, function(res) {
    var body = '';  // initialize to '' so chunks don't concatenate onto undefined
    res.on('data', function (data) {
        body += data
    });
    res.on('end', function() {
        console.log("REMOTE DATA: "+body)
        rundownParser.parseString(body);
    });
});
Q: Changing data organization on disk in MySQL We have a data set that is fairly static in a MySQL database, but the read times are terrible (even with indexes on the columns being queried). The theory is that since rows are stored randomly (or sometimes in order of insertion), the disk head has to scan around to find different rows, even if it knows where they are due to the index, instead of just reading them sequentially. Is it possible to change the order data is stored in on disk so that it can be read sequentially? Unfortunately, we can't add a ton more RAM at the moment to have all the queries cached. If it's possible to change the order, can we define an order within an order? As in, sort by a certain column, then sort by another column if the first column is equal. Could this have something to do with the indices? Additional details: non-relational single-table database with 16 million rows, 1 GB of data total, 512 mb RAM, MariaDB 5.5.30 on Ubuntu 12.04 with a standard hard drive. Also this is a virtualized machine using OpenVZ, 2 dedicated core E5-2620 2Ghz CPU Create syntax: CREATE TABLE `Events` ( `id` int(11) NOT NULL AUTO_INCREMENT, `provider` varchar(10) DEFAULT NULL, `location` varchar(5) DEFAULT NULL, `start_time` datetime DEFAULT NULL, `end_time` datetime DEFAULT NULL, `cost` int(11) DEFAULT NULL, PRIMARY KEY (`id`), KEY `provider` (`provider`), KEY `location` (`location`), KEY `start_time` (`start_time`), KEY `end_time` (`end_time`), KEY `cost` (`cost`) ) ENGINE=InnoDB AUTO_INCREMENT=16321002 DEFAULT CHARSET=utf8; Select statement that takes a long time: SELECT * FROM `Events` WHERE `Events`.start_time >= '2013-05-03 23:00:00' AND `Events`.start_time <= '2013-06-04 22:00:00' AND `FlightRoutes`.location = 'Chicago' Explain select: 1 SIMPLE Events ref location,start_time location 18 const 3684 Using index condition; Using where A: MySQL can only select one index upon which to filter (which makes sense, because having restricted the results using an index it cannot then determine how such restriction has affected other indices). Therefore, it tracks the cardinality of each index and chooses the one that is likely to be the most selective (i.e. has the highest cardinality): in this case, it has chosen the location index, but that will typically leave 3,684 records that must be fetched and then filtered Using where to find those that match the desired range of start_time. You should try creating a composite index over (location, start_time): ALTER TABLE Events ADD INDEX (location, start_time)
Q: How to uninstall Webforms for marketers How to uninstall Webforms for marketers module so that I can remove all references to this module? (for reference: WFFM -2.2.0 rev. 111104) Any help appreciated. A: If you're still looking for a solution, first, inspect the content of the WFFM package then based on what files and items you see, delete the files and remove the respective items in Sitecore. A: Unfortunately there is no real interface or function for uninstalling Sitecore modules (yet anyway). That said it doesn't mean that it is impossible. If you open the package file (.zip) and browse it you will find that the structure contains both physical files installed in the file system together with a directory of items serialized to files. By traversing the structure in the package you can find out which items and files to remove from your solution. Good luck! A: Open WFFM package (.zip) file and remove installed WFFM files as per package form Sitecore.
Q: Why is my tag closing early for the section tag?
I am getting an error that says the 'section' is closed by another tag. I checked my HTML and everything looks good.

<ng-container *ngIf="display$ | async as display">
  <section [class.open]="display === 'open'" (click)="close()">
    <div (click)="$event.stopPropagation()">
      <button class="close" type="button" (click)="close()">X</button>
      <div class="dateAlignment">
        <div class="dateComponent">
          <div class="date-selection">
            <div class="month">
              <input type="text" [(ngModel)]="selectedMonth" list="correctionMonth" placeholder="Month...." >
              <datalist id="correctionMonth">
                <option value="{{item.Name}}" [selected]="item.isSelected" *ngFor="let item of selectedMonthArray">{{item.id}}</option>
              </datalist>
            </div>
            <div class="year">
              <input type="text" [(ngModel)]="selectedYear" list="correctionYear" placeholder="Year...." >
              <datalist id="correctionYear">
                <option value="{{item}}" *ngFor="let item of yearArray">{{item}}</option>
              </datalist>
            </div>
          </div>
          <div class="buttongroup">
            <div class="submit">
              <div class="btn-toolbar" role="toolbar" aria-label="Toolbar with button groups">
                <div class="btn-group btn-group-lg" role="group" aria-label="First group">
                  <button type="button" class="btn btn-secondary reportButton">Correction Notices</button>
                  <button type="button" class="btn btn-secondary reportButton">Display Tablular Data</button>
                  <button type="button" class="btn btn-secondary reportButton">Mailing Labels</button>
                </div>
              </div>
            </div>
          </div>
          <div class="submitgroup">
            <div class="reportApplication">
              <button type="button" class="btn btn-success">Apply</button>
              <button type="button" class="btn btn-danger">Reset</button>
            </div>
          </div>
        </div>
      </div>
    </div>
  </section>
</ng-container>

A: One of your div tags is unclosed. Maybe you meant to enclose the <button> tag in a div?

<section [class.open]="display === 'open'" (click)="close()">
  <div (click)="$event.stopPropagation()">
    <button class="close" type="button" (click)="close()">X</button>
  </div>

A: On line 9, where you have the input: the input is a self-closing tag. It should be closed as below:

<input type="text" [(ngModel)]="selectedYear" list="correctionYear" placeholder="Year...." />

This is a self-closing tag; you missed the /. Let me know if it works.
Q: How do I write and fix vector in c++
I am making a question list that will store a list of questions. I am wondering if anyone could help me debug the problem.

QuestionList.h

#ifndef QUESTIONLIST_H
#define QUESTIONLIST_H

#include "Questions.h"
#include <iostream>
#include <vector>

using namespace std;

class QuestionList {
private:
    vector<Questions*> _questionList;
public:
    QuestionList();
    QuestionList(const QuestionList& orig);
    virtual ~QuestionList();
    void AddQuestion(Questions *question);
    void RemoveQuestion(int i);
};

QuestionList.cpp

#include "QuestionList.h"

QuestionList::QuestionList() { }

QuestionList::QuestionList(const QuestionList& orig) { }

QuestionList::~QuestionList() { }

QuestionList::QuestionList(){
    _questionList = vector<Questions*>;
}

QuestionList::AddQuestion(Questions *question){
    _questionList.push_back(*question);
}

QuestionList::RemoveQuestion(int i){
    _questionList.erase(_questionList.begin() + i);
}

Any guidance would be helpful.
Q: How much time should you spend planning a commit before writing code?
At the moment I'm spending more time planning out a commit than actually writing code when adding a new feature. Less than two hours would be lucky, and sometimes I'd spend a good part of the day without writing any code. This is making me unhappy, since I don't feel I'm productive enough (I'm living with my parents, and have never been employed as a programmer). If I don't do this amount of planning, I just end up writing code that will have to be undone before I commit, and this just messes up my project, because I don't like wasting any code I've already written and try to recycle it as much as possible (my precious). Someone said that programming isn't about how fast you can type; it's about how fast you can think. I'm not very good at thinking fast. I think I'm overly cautious, making my productivity not economically viable, but even still it's far too easy for me to waste a whole lot of time making a mess of my codebase.

Travis asked Can you explain what you mean by "planning out a commit"? I guess there's the time spent architecting, i.e., planning out the object hierarchy, which thread will do the work, GPU or CPU, planning polymorphism for m to n relationships, and which asynchronous pattern should I use. Then there are implementation details and parameter choice for scientific computations. I like to think about how I could iterate, so if I get a bad result it is rectifiable. Breaking down a feature into a series of behaviours, which you can verify correctness at each step. I suppose I think about how to verify correctness of intermediate steps a lot before I've even started.

Why does the code you write have to be undone before you commit? Well, it's easy to write code that's unmaintainable, and difficult to debug. So I have to backtrack and write something more structured. I also sometimes overlook some detail that makes my first attempt not viable. "Planning commits" is just what I came up with to communicate when you've finished one feature and are moving on to the next (obviously committing your changes first). You've got no Git changes and haven't yet written any code committing you down one path. There's one big commit that gets the scaffold in place and needs lots of planning, followed by a couple of smaller ones that don't need any planning. So maybe the commits in question are more like new branches. (It's just that I don't use Git branches.)

A: If I don't do this amount of planning, I just end up writing code that will have to be undone before I commit, and this just messes up my project, because I don't like wasting any code I've already written and try to recycle it as much as possible (my precious).

This is your problem. There are two reasons to write code:

* Writing the final version of the code that solves your problem.
* Writing code to explore the space and discover the most elegant way to solve the problem.

You can try to do the second in your head or on paper, but it's a lot easier to get a feel for where the trouble spots are if you actually write code. In doing so, you'll often realize that some parts are way too complicated the way you tried it, but that if you just did this and that, then you could rewrite this big ugly section this way and cut the code size by 50%. Every problem already has a most elegant solution; you'll have an easier time discovering it once you're actually writing code. (Probably. Everyone's got their own style, and I can only speak for what works for me.)
If you just start writing code without planning it out first, you'll write a bad solution and need to throw it away and rewrite. If you spend hours planning before writing any code, you'll also write a bad solution and need to throw it away and rewrite, but you'll be less willing to do so than you would have been if you just hacked out a quick prototype. Or as Fred Brooks succinctly put it, "plan to throw one away; you will, anyhow." (I'm assuming here that you're actually solving a novel problem; if you're just throwing together GUI code for MS Word v512 and you've got a specification and a deadline, it's probably not necessary to prototype. But that sort of coding's boring and won't teach you anything; you'll learn more (and have more fun) by working on hard problems than you will by writing boring boilerplate.) That said, recycling code isn't a bad idea. But the way to do it isn't to refuse to throw anything out. Rather, if you're writing a new program and some parts are similar to something you wrote before, take a look at that code and see if there's anything that you can modify to fit the new problem. And if you end up using the same code more than 2-3 times, look at which parts are always the same and which parts vary from project to project, and write a few library functions that'll let you reuse the code directly in the future. A: You're discovering the difference between "programming" and "development". Developing an application is so much more than churning out some code. All the different little units of code that come together to solve the problem need to work closely together and that takes planning! As you mentioned, if you don't do this planning, then you often end up needing to destroy code you wrote as it wasn't appropriate. Even with planning, this can happen and it's pretty normal. I actually recommend committing more often. As others have mentioned, version control is there to provide you a history of changes. Obviously you don't want to release unfinished code, so this is why we have branches in version control - you can have a development branch (or many of them even!) where you commit code you're still working on, and then you only commit the changes to the main branch when you're satisfied that it's all working. It's a huge advantage to be able to see the code that you've since deleted, in the context of where you wrote it, by going back through your commits... you never know when you might need that code that you ended up not using, for some future project. It seems to me that you would find the process flow for Test Driven Development (TDD) quite convenient. People apply TDD to their work in varying degrees, but the general idea is that before writing the actual code that solves the problem, you write the tests for that code, so that you know what needs to be fed in to a method, and what needs to come out of it. It ends up providing you enormous clarity of what needs to be done. The great thing about tests is that you write them in code too, which helps scratch that "but I need to do some real work" itch. When done effectively, TDD, as well as Unit Testing in general, makes for a more reliable development process. It also aids collaboration with other developers greatly as it serves as a type of documentation of the intention of your code. You (and others) can refactor code with confidence, because as long as the tests pass, the new code is doing the right thing and the rest of the application is going to get along with it. 
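To make that concrete, here is a minimal test-first sketch (plain Python unittest; the slugify example is purely illustrative, not from the question):

import unittest

def slugify(title):
    # written *after* the tests below pinned down the behaviour
    return title.strip().lower().replace(' ', '-')

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify('Hello World'), 'hello-world')

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual(slugify('  Trim Me  '), 'trim-me')

if __name__ == '__main__':
    unittest.main()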
Of course, this is providing that the tests cover all cases - if you find bugs that slip through, then it's time to add some more tests! This is normal too. My final piece of advice: when researching anything like version control branching, TDD, unit testing, etc., don't get too hung up on doing it "perfectly". Do what works for you. I find TDD very powerful and effective, so I do it - but I still don't do every single thing that the books and sites say to do when doing TDD. I do what works for me, with the goal being to deliver effective, high-quality code that can be understood and maintained by others (including future-me who might still be looking after the same code years down the track!). P.S. some bonus (potentially controversial) advice: Try to prefer written documentation, tutorials, and discussion over their audio-visual counterparts. It's a lot easier to find the information you're looking for faster by scrolling than by scrubbing through a video, you can paste excerpts into your own notes, and, in my experience, there's a lot more content available. A: Firstly: when coding for a living, especially as a junior in a team, typically not much design work is needed. This is because you'll be working in an existing code base. Chances are, you'll often be working on a feature that is similar to existing features, so you can look at those as an example. This is nowhere near as boring as it may sound; you'll be learning plenty of things, still need to understand the examples and adapt them to suit your needs. Designing something new can indeed be much harder, but it is also much rarer, especially when you're a junior developer. That is to say: in a typical junior-level development job, I don't think you'll run into the things you're worried about. Or at least to a far smaller extent. If I don't do this amount of planning, I just end up writing code that will have to be undone I would argue that for most developers, this is fairly normal, if they are developing something that is novel (to them and their existing code base). I do that planning sometimes, and when I start writing the code, I often realise that the plan translates to an awkward implementation that feels forced or over-engineered. ... which is good to know! Coding provides feedback on the plan. Planning and coding are very much iterative processes. You plan a bit, try to create some code with the planning in mind, which makes you realise that you overlooked something during planning, so you adjust the plan, code some more, rinse, repeat. I like to think that this is also how artists work. It's a creative journey. Often messy, sometimes boring, sometimes exhilarating, sometimes frustrating. Sometimes you end up with something boring that just works, sometimes with something beautiful that doesn't work. And, every now and then, with something that works and is beautifully elegant. [I] don't like wasting any code I've already written and try to recycle it as much as possible (my precious). Throwing code away is fine! The code has already served a useful purpose: it provided feedback on the planning. It helped you iterate. Someone said that programming isn't about how fast you can type, it's about how fast you can think. I'm not very good at thinking fast. Neither are most people, especially if they have to do the thinking without seeing any code. Coding helps to make things concrete. It may also obfuscate the bigger picture, so zooming in (code) and out (planning) is part of the iterative process. 
It's also worth noting that people, after years of professional experience, develop a kind of muscle memory for specific approaches, and an instinct for applying these. Which is half the battle when making something new. You cannot be expected to have that already (nor should you expect it from yourself). Put differently, the examples in existing code bases that I mentioned at the top of this answer are in their heads now, and they can apply them in new projects as well. "I think I'm overly cautious making my productivity not economically viable, but even still it's far too easy for me to waste a whole lot of time making a mess of my codebase." I think you're being too hard on yourself (I can relate). Try to change your mindset to allow the iterative process, to embrace the creative journey. And code that just works (and is readable) is good enough. If later on, new requirements or new features mean that the code is no longer good enough, you can adjust it. That adjustability is the reason the world moved away from specialised hardware and embraced software (running on generic hardware). Edit: Also, small steps are good, and focussing on making it work first, and only then making it right, is also good. If you already have working code, the planning gets easier because there are fewer unknowns and hypotheticals. Relevant quote is relevant: "First make it work, then make it right…" — Kent Beck …in the smallest steps you can manage. — Uncle Bob Martin (source) As always, everything is more nuanced in practice, and I don't completely let go of design when focussing on making it work.
A: I did a test. I wrote an average-sized line of code with a timer running. It took about 7 seconds. Let's do some math. If my working time is 37.5 hours per week, 47 non-vacation weeks per year, that would be about 0.9 million lines per year. Which is 75 000 lines per month. The best I have ever reached writing trivial code is 20 000 lines per month (and that was a non-vacation month so the calculation isn't entirely fair). So even then, about 27% of the time I'm writing code and about 73% of the time I'm thinking. However, usually I'm not writing trivial code. Most of the time, I'm doing work that requires careful planning. So I'd say my usual productivity is at most about 5000 lines per month. I wouldn't say exactly that I'm planning what to commit. I'm planning what to write. The commit follows from what I have written, sometimes by committing all, sometimes by breaking it into logical-sized chunks, and even sometimes throwing something away.
A: I would like to change your perspective for a moment. Commits are not something you plan. Commits, especially in the early stages of figuring out a problem, are little more than save-points along a journey. Instead, spend time thinking about the problem. Break big problems into little problems, and then break those down into smaller problems. Keep decomposing and digesting problems until your mind can see a single line of code. Write that line of code. Keep writing code as you think. Mess up. Change the code. Change it 15 more times, if need be. Stop and commit when you feel like losing that code would be a big setback, or during a natural break in your rhythm of thought and writing code. There is no general rule or guideline. When to commit is more a feeling than some methodological practice. Basically, if you think, "it would really suck to lose this code while I figure things out," then commit what you have.
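As a sketch of what those save-points can look like in git (the branch and commit names are invented; the messy history can be squashed into one clean changeset later, as noted just below):

git checkout -b wip/parser                 # private working branch for save-points
git commit -am "wip: first stab at tokenizer"
git commit -am "wip: handle empty input"
git commit -am "wip: rip out half of that, try a table-driven loop"
git rebase -i main                         # mark the wip commits as squash/fixup before sharing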
Just be sure to work in your own branch so no one else has to deal with whatever state your code is in. Don't worry about whether the code works, compiles, or looks pretty. Many version control systems allow you to combine many messy commits into one clean, cohesive changeset. Version control is a tool just as much as a text editor. Version control just happens to be really good at backing up your work. So, to directly answer your question, spend zero seconds of your life planning a commit. Write code until you feel like you have something to lose, then commit those changes so you don't.
A: You mention live stream YouTubers as a standard to live up to, as if they would make up stuff on the spot, type it in and be done. That is not how it goes. They planned and practiced beforehand too, or they show you things they have done hundreds of times before, that are in their routine. When you see a musician play for one minute on YouTube it is safe to assume they spent days practicing, recorded tens of takes and ultimately picked the best one. And that is not counting the years they put in to get up to their level in the first place. Writing software is a skill like any other that takes time to develop. And different people specialize in different things. And, in your particular specialty, there will likely always be some people who can do things quicker. Deciding when and where to optimize is another thing. It seems you tend to be obsessively working to satisfy your own compulsive need to eradicate any imperfections. This is fine in the environment you described, and it is how many great creators started their career. Just enjoy it and learn from it while no one is asking you when it's done yet.
A: I like to think of this "planning" as "discovering the solution". The time spent discovering a solution depends on how much you already know about the problem, how much you know about the tools at your disposal, how easy the problem constraints are to satisfy, and how complex the problem is. All these things can vary greatly. Sometimes, it takes mere seconds before inspiration strikes, sometimes it takes weeks. That said, I do have some inputs for you: * *The process of discovery is often iterative For any non-trivial problem, discovering the solution consists of several steps, with later steps building on earlier ones. The faster you can iterate, and the more information you glean in each step, the faster you'll be done. *The process of discovery may involve writing code Sometimes, writing code yields information more easily than abstract thinking. For instance, if there is uncertainty about the requirements, it may be more illuminating to get a quick UI prototype into the hands of actual users than holding lengthy meetings discussing UML diagrams ;-) *Code isn't written perfect, it evolves towards perfection As we iterate towards a solution, our understanding of the code we need changes. To reflect our improved understanding in code, we can delete old code, write new code -- or we can edit our existing code! Provided we have written our code in a way that is reasonably easy to change, that's often the most efficient way. To conclude, software development is about discovering solutions, and efficient software development is about discovering solutions efficiently. It's not "writing code" that takes so long, it's figuring out which code needs writing. (...
and knowing that, experienced software developers are quite willing to write experimental code if that speeds up their thinking process)
A: "At the moment I'm spending more time planning out a commit than actually writing code" No one cares how you spent your time. They care about what you made (if you're lucky). "This is making me unhappy, since I don't feel I'm productive enough" Make something people use. Watch them use it. Learn what they really need. "If I don't do this amount of planning, I just end up writing code that will have to be undone before I commit, and this just messes up my project, because I don't like wasting any code I've already written and try to recycle it as much as possible (my precious)." If that works for you, fine. I plan myself. But I also keep a junk folder I dump code in that I'm never even going to check in and will most likely never look at again. It's ok to make a mess if you clean it up. I also find myself planning and debugging when I'm taking a shower. Bathroom tiles can help you visualize arrays. "Someone said that programming isn't about how fast you can type, it's about how fast you can think. I'm not very good at thinking fast." It's not about how fast you think. It's about what you think. Give yourself time to do it well. "I think I'm overly cautious making my productivity not economically viable, but even still it's far too easy for me to waste a whole lot of time making a mess of my codebase." I hear a lot of worrying about what you yourself think. You need some external validation. Put your stuff in front of other people and learn what they care about. "How much time should you spend planning a commit before writing code?" Planning is like making popcorn. While the ideas are popping, great. When the learning stops, knock it off and get back to work. Context switching from planning to coding is hard. The point of both is to communicate your intent. The earlier you do that, the cheaper your bugs are to fix. However, guard against spending too much time in either. Respectively, that's called analysis paralysis and gold plating. "Immediately I can make sure I have planned out the night before what I will program the next morning. But I guess because I have nearly finished what I set out to build," It's nice when that happens, but often you find problems only once some code exists. Code you have to be willing to scrap or it will drag you down. Bad code can be a good teacher if you let it. I think of it as a signpost pointing me to a better solution. "I am thinking about getting a job, and my actual output is going to start mattering" Don't fixate on productivity. Communicate. Or you'll spend your time working very hard on something no one cares about.
A: This is not really an answer but a diary of my thoughts on the topic. Edit 2: I think the answer to the question is that you shouldn't work it all out in your mind before writing code. Rather you should work out the pieces on the fly (which requires a leap of faith, faith which you need to train up). Rather you should start by coding up pieces that are invariant to your problem (structure and enumeration definitions) first, leaving the pieces that need to fit together with other code (and thus change) to be written up last. I will claim that orthogonality/separation of concerns is the most important quality of code. I suppose my current approach is to propose a solution (most of the time only in my head).
I compare it against all my future planned requirements, which reveals lack of orthogonality, and then repeat with a rectified solution. Perhaps this is unavoidable for me, and the answer to the question is that I do need to plan excessively. I suppose what makes it hard is that I'm still in the stage of (re)inventing new patterns, rather than using patterns I'm already familiar with. I've started watching Tsoding streams again and find the process of following his thoughts quite therapeutic. It seems as if software development requires some kind of meditative mental state, not required when you are just programming. Edit: I suppose a lot of the planning is a defensive maneuver against yak shaving, the root of all evil. I suppose I have developed obsessive compulsive tendencies about this. Better to think of a way to avoid writing low-level imperative code wherever you can. Also I think solo development is highly questionable. It seems to me that software development works by specialization: having people specialize so that their work is more or less the same, with less variation in their work and less working on unfamiliar things. I've spent a year stuck solving my machine learning problem (although I've learned a lot) which I'm now finally deploying, but it took its toll mentally, exacerbated by personal issues. I guess this has made me overly cautious in the face of uncertainty, because there's a lot on the line if I can't make it work. Some work on my mental state might help. On the other hand, live streamers like Tsoding and Andreas Kling demonstrate an amazing mental strength working with high uncertainty and only a small surface area of information. It's amazing how little information they need to produce decidability. Also their context switching is far better than mine. I suppose these are senior developer traits that have taken a while to develop. Also, I believe in IQ, but I come from a mostly lower middle class background, so perhaps I'm not cut out for the more white collar kind of programming I am doing at the moment. Blue collar programming might be more appropriate for me. Also I think there is a lot of benefit in working for a corporation and having them hold a gun to your head for your development. I've never had anyone look to see if I've done enough work. I'm not accountable at all. My planning mostly involves doodling on a piece of paper, even though I never read any of it. Perhaps my planning should be limited to navigating the code base, visiting places where insertions should be made and writing comments. This is the first month back to software development and maybe I'm still rusty. When solving a machine learning problem it's a lot quicker to think about what won't work out than to actually train a neural network, because curating a suitable data set is so time-consuming. But programming is different, because you are much more likely to get it right the first time, making planning less productive. So I need to recalibrate my patterns. Edit 3: You can always write hacks that get a feature working. This results in bad code. A lot of the time you will be removing code, so it's best not to spend the time making it good code until you are certain that the feature will stay. However, good software developers have obsessive compulsive impulses to make the code good when they know it will stay. Inexperienced software developers take more time to make the code "good", so it is no small task to just fix it up when you know a feature will stay. What makes good code?
I'd probably say 1-1 correspondences atm. Edit 4: I think cognitive division of labour into low stress work and high stress work is important. Certain things, like solving compiler errors and adding features that are decoupled, low surface area, with few moving parts (i.e. text-editing work), you know are going to be easy, and you can complete them without running into much trouble. With other things, you can dig yourself into a hole. You can't understand the complexity before you start, you may spend far too long removing bugs from your first implementation, your solution may not be robust enough or its performance scalable. These are the "weeks of work can avoid hours of planning" tasks. It's important to enjoy the low stress work, while forming better expectations for the complex work to avoid frustration. Difficult work will take time, and you need to accept that and work slowly through it. But slow work is risky, since you could end up wasting your time, and thus it demands more planning.
A: It's difficult to anticipate things when you don't have lots of experience, because you don't yet have the experience to know what things you need to anticipate. There is a story about a pottery class where half the students were taken to one side, given pens, paper, as much clay as they needed and a wheel etc. and told they had two weeks to plan the best possible clay vase they could make. The other half of the class were also given as much clay as they needed and a wheel etc. and told they had two weeks to make as many varied clay vases as they could; the quality was secondary, but they were free to learn and practice as many techniques as they could over the next fortnight. At the end of the two weeks the two halves of the class were brought back together and given a competition in which every student would make a vase and all the vases would then be independently judged. According to the story (which is probably apocryphal, but you take the point) the students from the half of the class which had made lots and lots of vases (although the quality was secondary) ended up making better vases than the students from the half of the class who had been encouraged to spend a long time thinking and planning out the best clay vases they could come up with. The point is that - in this story, at least - the real experience came from engaging in the activity repeatedly, not from the thinking through and planning out the activity. With real experience came real cognitive shortcuts and the ability to anticipate scenarios which really occur when engaging in the activity.
And I have to beat them again, and I find that in this type of field that demands innovation and constantly inventing new solutions to existing problems, you have to take the risk of trying things even way outside of the box, and also have the discipline to throw them away when they turn out to be horrible (which they will be most of the time when you're trying things no one else has tried before or things no one else accepted before, unless you're some kind of super wizard, in which case... please teach me!). But I'm also a visual arts major besides a CS major and I really think programming is almost as artistic as the visual arts, and not so much less scientific than the visual arts. And a field that is artistic means it's one where the best solution is the one for which you manage the best results, even if your methodology is way different from what's considered orthodox or best, and I subscribe very much to this idea. So when people on here ask me questions like, "What's the best algorithm to sort these elements?", for me it depends on the programmer and what they find most comfortable and intuitive. It seems almost impossibly unlikely, but if they write a bubble sort for 32-bit signed integers that beats my fastest multithreaded SIMD radix sort implementation, then their bubble sort is the best solution since it produces the best results. And if you're really after best results in a field like this, where even an assembly expert can't perfectly predict the dynamic nature of the hardware these days, with CPU caches and branch prediction and whatnot, it's counter-intuitive to spend too much time mulling over an idea and getting too attached to it, as most of us naturally would, without diving in and sketching out the code and testing its robustness, measuring its efficiency with benchmarks and a profiler, and having the discipline to toss it away and try something else if it sucks... which is way easier if you just dove in and started coding it sooner rather than later. I have many of my colleagues often asking me questions like, "How did you come up with this?" when I come up with an algorithm or data structure that is sometimes as much as 100x faster than our previous solution, taking 1/10th of the memory, and with even less and more straightforward code than the previous. And I think I disappoint them every time with my answer: "I just tried a whole bunch of stuff and measured them all and picked the best solution." All these people are so much smarter than me but they get married to concepts in their minds which don't pass the real-world tests. And I might be too extreme on the opposite side, but I never marry myself to anything until my profiler and benchmarks are showing something extraordinary, and I continue with it, and get jaw-dropping results. Anything short of that and I'm ready to delete my solution in my private version control branch and try something else. So for your question, and especially your personality, I'd suggest taking some of mine. Just dive in and start coding more, but don't do it in a way where it's expensive to throw away your solutions (ex: don't write the code in your product's codebase, do it in your "personal sketchbook").
I'll outline the tree of cases, and only then start to work on the specific branches (in the context of programming). You don't really write the code character by character. You're building up the structure top down. I like to compare programming to drawing, and for that I get so much sh*t because it sounds extremely pretentious. People think I'm being like "the programming is like art. It's like making a painting". I'm saying it's like painting or drawing technically. I'm not talking about the artistic expression type of programming. I don't care about artistic expression, it's just a subjective thing in the context of programming. I'm saying it's like painting or drawing technically. How do you paint or draw technically? You make a sketch; you outline the general idea. Then you go down into more and more detail. You draw top down. You never draw perfectly as you envision it right away, because you can't do that. Once you've started top down, you've gathered more information about what you want. You can see what works, and what doesn't work. And you can make certain decisions. I think the problems start when you actually have to get the code running before you can see what works and what doesn't. In such cases, it may be better to "run the simulation in your head" (i.e. "planning"), depending on how time-consuming the implementation is, and how much of a mess it will make of your code base. But someone also disagreed in the chat: In my experience, sometimes it's easier to go top down when solving some programming problem, and sometimes it's more apt to start from individual small components and build up more complex stuff.
A: A little story for you. Sir Peter Swinnerton-Dyer, an eminent mathematician, computer scientist, and public administrator, once volunteered to write the bootstrap operating system for a new computer. He said he thought he could do it in six weeks. He spent the next five weeks walking and resting in the countryside, and his colleagues were getting increasingly anxious. In the sixth week he announced that he'd cracked it; all that remained was to write it down; so he started writing it down, and (the story goes) it worked the first time. The best programmers have it all sorted out in their head before they write a line of code.
Q: Error using tmap on large sf object in R
I'm trying to use Resolve ecoregion data to map the biomes of the African continent. The shapefile is global, and there are over 800 polygons, as each polygon represents a unique ecological area (whereas there are only seven terrestrial biomes; biomes are the larger spatial unit and are composed of similar ecological areas). Perhaps because of the data size or number of polygons, I am having a difficult time producing a map using tmap. I first cropped the shapefile to the African continent:
resolve = st_read("Data/Ecoregions2017/Ecoregions2017.shp")
Africa = st_read("Data/Africa_SHP/Africa/Africa.shp")
st_crs(Africa) = crs(resolve)
resolveAfrica = st_intersection(st_make_valid(resolve), st_make_valid(Africa))
But when I try to then map this cropped shapefile:
tm_shape(Africa) + tm_borders() + tm_shape(resolveAfrica) + tm_fill("BIOME_NAME")
R returns an error:
Error in vapply(lst, class, rep(NA_character_, 3)) : values must be length 3, but FUN(X[[56]]) result is length 2
Does anyone know what steps I can take to address this error?
A: This error can arise when trying to use a tmap function that expects polygons (like tm_polygons(), tm_fill(), or tm_borders()) on an object that contains GeometryCollection geometries (i.e., a mixture of polygons, lines, points, etc. in the same geometry). You can use sf::st_geometry_type(Africa) or sf::st_geometry_type(resolveAfrica) to confirm this is the case. You can use sf::st_collection_extract(Africa, type = "POLYGON") or sf::st_collection_extract(resolveAfrica, type = "POLYGON") to pull out just the polygon geometries from within each of those sf objects, which should then let your tmap function calls work. Even with some of the rows of the object being LINESTRING geometries, tm_fill() and tm_borders() still seem to know what to do with them. In case that doesn't just work, perhaps you can pull out the individual types of geometries using multiple sf::st_collection_extract() function calls and then rbind() them back together. As an example with the Africa dataset (note that the type argument takes the singular forms "POINT", "LINESTRING", and "POLYGON"):
Africa_pts <- sf::st_collection_extract(Africa, type = "POINT")
Africa_lines <- sf::st_collection_extract(Africa, type = "LINESTRING")
Africa_polys <- sf::st_collection_extract(Africa, type = "POLYGON")
Africa_combined <- rbind(Africa_pts, Africa_lines, Africa_polys)
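If you want to confirm what you are dealing with before and after the extraction, a quick tabulation of geometry types helps; this one-liner is just a sketch using the objects from the question:

table(sf::st_geometry_type(resolveAfrica))

Any GEOMETRYCOLLECTION entries left in that table point at the rows that tm_fill() will still choke on.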
Q: Does the android logger run on the main thread?
Does the Android logger run on the main thread? If so, does logging big entries have a performance impact on rendering the UI?
A: The answer to your first question is that it depends on where you are calling Log from, so yes: if you call it from the main thread, it runs on the main thread. You are allowed to call Log anywhere you want. Since it prints the log to your console, it definitely has an impact on your app's performance. Here is an example from the docs:
Log.v(TAG, "index=" + i);
Don't forget that when you make a call like that, when you're building the string to pass into Log.d, the compiler uses a StringBuilder and at least three allocations occur: the StringBuilder itself, the buffer, and the String object. Realistically, there is also another buffer allocation and copy, and even more pressure on the gc. That means that if your log message is filtered out, you might be doing significant work and incurring significant overhead. Read the docs.
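A common way to avoid paying that cost for messages that will be filtered out anyway is to guard the call; this is only a sketch, with TAG and i standing in for the names from the example above:

if (Log.isLoggable(TAG, Log.VERBOSE)) {
    // The string concatenation only happens when verbose logging is enabled.
    Log.v(TAG, "index=" + i);
}

Wrapping such calls in an if (BuildConfig.DEBUG) check is another common pattern; since that flag is a compile-time constant, the whole block can be stripped from release builds.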
Q: Check which of an array's values is also an object property
I have an object 'ecom' which will have a property that is one of ['detail','add','remove','checkout','purchase']. I want to know which of the 5 potential properties the object has. What is the shortest, cleanest way to get that?
A: You can use filter() and hasOwnProperty()
let arr = ['detail','add','remove','checkout','purchase'];
let obj = {detail:'val',add:0,purchase:33}
let res = arr.filter(x => obj.hasOwnProperty(x));
console.log(res)
Without arrow function
let arr = ['detail','add','remove','checkout','purchase'];
let obj = {detail:'val',add:0,purchase:33}
let res = arr.filter(function(x){ return obj.hasOwnProperty(x) })
console.log(res)
Q: Metal shader in SceneKit to outline an object
I'm playing around and trying to implement a Metal shader in SceneKit that will outline an object. The idea is to draw an outline (or silhouette) similar to this image found in this blogpost (the blogpost doesn't contain any code): I'm new to SceneKit and Metal shaders, so I'm just able to draw some geometry and write a simple vertex or fragment shader. I'm curious how I can achieve this kind of effect. Is it done in multiple passes? Cheers!
A: The basic idea here is to clone the "selected" node and its geometry, then use a custom vertex and fragment shader to "push" the geometry out along the vertex normals, drawing only the back faces of the cloned geometry with a solid color. I wrote a small sample project to demonstrate this and posted it here. The core Swift code looks like this:
let outlineProgram = SCNProgram()
outlineProgram.vertexFunctionName = "outline_vertex"
outlineProgram.fragmentFunctionName = "outline_fragment"
let outlineNode = duplicateNode(node)
scene.rootNode.addChildNode(outlineNode)
outlineNode.geometry?.firstMaterial?.program = outlineProgram
outlineNode.geometry?.firstMaterial?.cullMode = .front
The portion of the vertex shader responsible for pushing vertices along their normal looks like this:
const float extrusionMagnitude = 0.05;
float3 modelPosition = in.position + normalize(in.normal) * extrusionMagnitude;
From there, you just apply your typical model-view-projection matrix and return a flat color from the fragment shader.
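For completeness, the flat-color fragment function can be as small as this; the particular color is an arbitrary choice for the sketch:

fragment half4 outline_fragment() {
    // The back faces of the extruded clone are drawn in one solid color.
    return half4(0.1, 0.1, 0.1, 1.0);
}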
Q: reduce ytick density for two pgfplots in groupplot
Please help me adjust the ytick density of my second plot in the groupplot I have shown. I have tried to increase the max space as suggested in another thread but it only works for the first plot and not the second one. I tried positioning it in a different area, i.e. after addplot, but that didn't work either. Thanks for your time.
\begin{figure}[h]
\begin{minipage}{\columnwidth}
\centering
\begin{tikzpicture}[scale = .72, transform shape,trim left]
\begin{groupplot}[
group style={
rows=1,
columns=2,
horizontal sep=0pt,
},
scale only axis,
xlabel={nm},
ylabel={Abs},
xmin=385, xmax=565,
xtick pos=left,
ytick pos=left,
no marks,
max space between ticks=50pt
]
\nextgroupplot[title=\textbf{(a)} $50:25$ (CoPor:\Lig{1}),max space between ticks=50pt]
\addplot table [col sep=comma, x=nm, y=30] {ST051 5025 Me-IMD SG.csv};
\addplot table [col sep=comma, x=nm, y=600] {ST051 5025 Me-IMD SG.csv};
\nextgroupplot[title=\textbf{(b)} $50:25$ (CoPor:Buffer),ylabel ={}]
\addplot table [col sep=comma, x=nm, y=30] {ST051 5025 blank SG.csv};
\addplot table [col sep=comma, x=nm, y=600] {ST051 5025 blank SG.csv};
\end{groupplot}
\end{tikzpicture}
\end{minipage}
\caption{Abs. spectra of $50:25$ CoPor:\Lig{1} and CoPor:Buffer (served as a blank) after 30 \si{\minute} and 600 \si{\minute} showing gradual thermodynamic change of complex. Buffer was 100 \si{\milli\molar} \ce{NaH2PO4}.}
\end{figure}
Q: Gaussian Integral over matrix elements with correlation
$\mathbf{J}$ is a random matrix where $J_{ij}$ follows a Gaussian distribution. Consider the following integral:
$$I=\int\left(\prod_{ij}\mathrm{d}J_{ij}\right) \exp\left\{-\frac{N}{2} \sum_{i, j, k} J_{k i} A_{i j} J_{k j}+N\sum_{k, j} B_{k j} J_{k j}\right\}$$
where $\mathbf{A}$ and $\mathbf{B}$ are Hermitian. This is a regular Gaussian integral and by completing the square I can obtain (if not mistaken?):
$$I=(2 \pi)^{\frac{N^2}{2}}(\operatorname{det} \mathbf{A})^{-N / 2} \exp \left\{\sum_{i,j,k}^{n} \frac{1}{2} B_{ki}\left( A^{-1}\right)_{i j} B_{jk}\right\}$$
However, if the elements $J_{ij}$ are correlated, my integral $I$ now becomes:
$$I=\int\left(\prod_{ij}\mathrm{d}J_{ij}\right) \exp\left\{-\frac{N}{2} \sum_{i, j, k} J_{k i} A_{i j} J_{k j}+N\sum_{k, j} B_{k j} J_{k j} +\tau N\sum_{ij}J_{ij}J_{ji}\right\}$$
with $-1<\tau<1$. How can I deal with the $\sum_{ij}J_{ij}J_{ji}$ terms? Any remark or advice is always appreciated. Thanks.
A: We assume that we are integrating over Hermitian matrices. Completing the square gives
\begin{equation}\begin{aligned} &I=(2\pi)^{\frac{N^2}2}(\det A)^{-\frac N2}\int\prod_{i,j}dJ_{ij}\exp\Bigg(-\frac N2\text{tr}\bigg[((A-2\tau1_N)J-B)^\dagger(A-2\tau1_N)^{-1}((A-2\tau1_N)J-B) - B(A-2\tau1_N)^{-1}B\bigg]\Bigg) \end{aligned}\end{equation}
Doing a linear shift in $J$ by $(A-2\tau 1_N)^{-1}B$ gives us
\begin{equation}\begin{aligned} &=(2\pi)^{\frac{N^2}2}(\det A)^{-\frac N2}\int\prod_{i,j}dJ_{ij}\exp\Bigg(-\frac N2\text{tr}\bigg[J^\dagger(A-2\tau1_N)J - B(A-2\tau1_N)^{-1}B\bigg]\Bigg). \end{aligned}\end{equation}
So now we need to evaluate
\begin{equation}\begin{aligned} Z=\int\prod_{i,j}dJ_{ij}\exp\Bigg(-\frac N2\text{tr}\bigg[J^\dagger AJ\bigg]\Bigg). \end{aligned}\end{equation}
Since $A$ is Hermitian, there exists a unitary $U$ such that $A=UDU^\dagger$, for $D=\text{diag}(\lambda_1,\dots,\lambda_N)$, and we assume $\lambda_i\in\mathbb R_{>0}$. We do the change of variables $U^\dagger MU= J$. With this change of variables
\begin{equation}\begin{aligned} \text{tr}(JAJ)&=\sum_i\lambda_iM_{ii}^2+\sum_{i\neq j}(\lambda_i+\lambda_j)\left((M_{ij}^{(r)})^2+(M_{ij}^{(im)})^2\right)\\ &=\sum_i\lambda_iM_{ii}^2+2\sum_{i<j}(\lambda_i+\lambda_j)\left((M_{ij}^{(r)})^2+(M_{ij}^{(im)})^2\right), \end{aligned}\end{equation}
where $M_{ij}^{(r)}$ is the real part of $M_{ij}$ and $M_{ij}^{(im)}$ is the imaginary part. Since the change of variables is unitary, it has unit Jacobian, and this means we can write
\begin{equation}\begin{aligned} Z&=\int\prod_{i,j}dM_{ij}\exp\Bigg(-\frac N2\bigg[\sum_i\lambda_iM_{ii}^2+2\sum_{i<j}(\lambda_i+\lambda_j)\left((M_{ij}^{(r)})^2+(M_{ij}^{(im)})^2\right)\bigg]\Bigg)\\ &=\frac{(2\pi / N)^{N^2/2}}{\sqrt{\det A}\prod_{i<j}(\lambda_i+\lambda_j)}. \end{aligned}\end{equation}
Going back to $I$, if we assume that the eigenvalues of $A$ are all greater than $2\tau$, then the integral is convergent and we obtain
\begin{equation} I=(2\pi/\sqrt{N})^{N^2}\exp\left(\frac N2\text{tr}\left[B(A-2\tau1_N)^{-1}B\right]\right)\frac{1}{\sqrt{\det(A-2\tau 1_N)}(\det A)^{N/2}\prod_{i<j}(\lambda_i+\lambda_j)}. \end{equation}
I'm going to guess that the normalisation constant for the integral is incorrect. If the normalisation constant was
$$ C=\left(\frac{2\pi}{N^2/2}\right)^{-N^2}\sqrt{\det A}\prod_{i<j}(\lambda_i+\lambda_j), $$
then the integral would be 1 at $\tau=0$. As you pointed out, this is not the problem you had. You had $J_{ij}$ where $J$ is real valued, and the term you added was Tr$(J^2)$.
Now we can decompose $J=J^{(s)} + J^{(a)}$ where $J^{(s)}$ is symmetric and $J^{(a)}$ is antisymmetric. Then
\begin{align} \text{Tr}(J^2)&=\text{Tr}((J^{(s)})^2 + (J^{(a)})^2+2J^{(s)}J^{(a)})\\ &=\text{Tr}((J^{(s)})^2 + (J^{(a)})^2)\\ &=\text{Tr}((J^{(s)})^TJ^{(s)} - (J^{(a)})^TJ^{(a)})\\ \text{Tr}(JAJ^T)&=\text{Tr}(J^{(s)}AJ^{(s)} - J^{(a)}AJ^{(a)})\\ &=\text{Tr}(J^{(s)}AJ^{(s)} + (J^{(a)})^TAJ^{(a)}). \end{align}
This uses the fact that the product of a symmetric matrix and an antisymmetric matrix has zero trace. Now change variables to $K$ where $K^{(s)}=J^{(s)}$ and $K^{(a)}=iJ^{(a)}$. Then $K$ is Hermitian and this should reduce to the problem at the top of my answer, except for the issue that the purely imaginary part multiplying $A$ now has a negative sign. This means that this integral will actually diverge. This can be saved if $\tau$ is bigger than the absolute value of the eigenvalues of $A$.
Q: Standard way of referencing an object by identity (for, e.g., circular references)?
Is there a standard way of referencing objects by identity in JSON? For example, so that graphs and other data structures with lots of (possibly circular) references can be sanely serialized/loaded? Edit: I know that it's easy to do one-off solutions ("make a list of all the nodes in the graph, then …"). I'm wondering if there is a standard, generic, solution to this problem.
A: Douglas Crockford has a solution that uses JSONPath (an XPath-like syntax for describing JSON paths). It seems fairly sane: https://github.com/douglascrockford/JSON-js/blob/master/cycle.js
A: I was searching for this same feature recently. There does not seem to be a standard or ubiquitous implementation for referencing in JSON. I found a couple of resources that I can share: * *The Future for JSON Referencing http://groups.google.com/group/json-schema/browse_thread/thread/95fb4006f1f92a40 - This is just a discussion on id-based referencing. * *JSON Referencing in Dojo http://www.sitepen.com/blog/2008/06/17/json-referencing-in-dojo/ - An implementation in Dojox (extensions for the Dojo framework) - discusses id-based and path-based referencing. * *JSONPath - XPath for JSON http://goessner.net/articles/JsonPath/ - This seems to be an attempt at establishing a standard for path-based JSON referencing - maybe a small subset of XPath (?). There seems to be an implementation here but I kept getting errors on the download section - you might have better luck. But again, this is nowhere close to a standard yet.
A: There is the "JSON Reference" specification, but it seems it never got beyond the state of an expired Internet draft. Still, it seems to be used in JSON Schema and Swagger (now OpenAPI) (for reusing parts of an API description in other places of the same or another API description). A reference to an object in the same file looks like this: { "$ref": "#/definitions/Problem" }.
A: There is no canonical way to achieve that. JSON does not have native support for references, so you have to invent your own scheme for unique identifiers which will act as pointers. If you really want to make it generic you could use the object identifiers provided by your programming language (e.g. object_id in Ruby or id(obj) in Python).
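To make the cycle.js option from the first answer concrete: once the script is loaded, it patches JSON with decycle and retrocycle. A small sketch (the object shape is invented):

var node = { name: "root" };
node.self = node;                                  // circular reference
var text = JSON.stringify(JSON.decycle(node));     // '{"name":"root","self":{"$ref":"$"}}'
var restored = JSON.retrocycle(JSON.parse(text));
console.log(restored.self === restored);           // true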
Q: Xorg log full of RandR output I am running Ubuntu 14.04.1 on some specialised hardware that directs manipulates the HDMI output before displaying it to the screen. I am noticing that from time to time the Xorg log goes crazy with constant output as if xrandr is being called and logged again and again and again. Afters a few days of nothing in the Xorg log these lines get repeated every 500 milliseconds. [397158.319] (II) RADEON(0): EDID vendor "UHD", prod id 0 [397158.320] (II) RADEON(0): Using hsync ranges from config file [397158.320] (II) RADEON(0): Using vrefresh ranges from config file [397158.320] (II) RADEON(0): Printing DDC gathered Modelines: [397158.320] (II) RADEON(0): Modeline "3840x2160"x0.0 594.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync (135.0 kHz eP) [397158.320] (II) RADEON(0): Modeline "3840x2160"x0.0 297.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync (67.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1920x1080i"x0.0 74.25 1920 2448 2492 2640 1080 1084 1094 1125 interlace +hsync +vsync (28.1 kHz e) [397158.320] (II) RADEON(0): Modeline "1366x768"x0.0 85.50 1366 1436 1579 1792 768 771 774 798 +hsync +vsync (47.7 kHz e) [397158.320] (II) RADEON(0): Modeline "800x600"x0.0 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [397158.320] (II) RADEON(0): Modeline "640x480"x0.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [397158.320] (II) RADEON(0): Modeline "640x480"x0.0 31.50 640 664 704 832 480 489 492 520 -hsync -vsync (37.9 kHz e) [397158.320] (II) RADEON(0): Modeline "640x480"x0.0 30.24 640 704 768 864 480 483 486 525 -hsync -vsync (35.0 kHz e) [397158.320] (II) RADEON(0): Modeline "640x480"x0.0 25.18 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [397158.320] (II) RADEON(0): Modeline "720x400"x0.0 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1280x1024"x0.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz e) [397158.320] (II) RADEON(0): Modeline "1024x768"x0.0 78.75 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.0 kHz e) [397158.320] (II) RADEON(0): Modeline "1024x768"x0.0 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1024x768"x0.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [397158.320] (II) RADEON(0): Modeline "832x624"x0.0 57.28 832 864 928 1152 624 625 628 667 -hsync -vsync (49.7 kHz e) [397158.320] (II) RADEON(0): Modeline "800x600"x0.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [397158.320] (II) RADEON(0): Modeline "800x600"x0.0 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [397158.320] (II) RADEON(0): Modeline "1152x864"x0.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1280x720"x60.0 74.48 1280 1336 1472 1664 720 721 724 746 -hsync +vsync (44.8 kHz e) [397158.320] (II) RADEON(0): Modeline "1280x800"x0.0 71.00 1280 1328 1360 1440 800 803 809 823 +hsync -vsync (49.3 kHz e) [397158.320] (II) RADEON(0): Modeline "1280x1024"x0.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz e) [397158.320] (II) RADEON(0): Modeline "1440x900"x0.0 88.75 1440 1488 1520 1600 900 903 909 926 +hsync -vsync (55.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1600x900"x60.0 119.00 1600 1696 1864 2128 900 901 904 932 -hsync +vsync (55.9 kHz e) [397158.320] (II) RADEON(0): Modeline "1680x1050"x0.0 119.00 1680 1728 1760 1840 
1050 1053 1059 1080 +hsync -vsync (64.7 kHz e) [397158.320] (II) RADEON(0): Modeline "720x576"x0.0 27.00 720 732 796 864 576 581 586 625 -hsync -vsync (31.2 kHz e) [397158.320] (II) RADEON(0): Modeline "1920x1080"x0.0 74.25 1920 2558 2602 2750 1080 1084 1089 1125 +hsync +vsync (27.0 kHz e) [397158.320] (II) RADEON(0): Modeline "2880x480"x0.0 108.00 2880 2944 3192 3432 480 489 495 525 -hsync -vsync (31.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1920x1080"x0.0 74.25 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync (33.8 kHz e) [397158.320] (II) RADEON(0): Modeline "1920x1080"x0.0 74.25 1920 2448 2492 2640 1080 1084 1089 1125 +hsync +vsync (28.1 kHz e) [397158.320] (II) RADEON(0): Modeline "1440x480i"x0.0 27.00 1440 1478 1602 1716 480 488 494 525 interlace -hsync -vsync (15.7 kHz e) [397158.320] (II) RADEON(0): Modeline "1440x576i"x0.0 27.00 1440 1464 1590 1728 576 580 586 625 interlace -hsync -vsync (15.6 kHz e) [397158.320] (II) RADEON(0): Modeline "1440x288"x0.0 27.00 1440 1464 1590 1728 288 290 293 312 -hsync -vsync (15.6 kHz e) [397158.320] (II) RADEON(0): Modeline "1280x720"x0.0 74.25 1280 1720 1760 1980 720 725 730 750 +hsync +vsync (37.5 kHz e) [397158.320] (II) RADEON(0): Modeline "1440x240"x0.0 27.00 1440 1478 1602 1716 240 244 247 262 -hsync -vsync (15.7 kHz e) [397158.320] (II) RADEON(0): Modeline "1920x1080i"x0.0 74.25 1920 2008 2052 2200 1080 1084 1094 1125 interlace +hsync +vsync (33.8 kHz e) [397158.320] (II) RADEON(0): Modeline "1280x720"x0.0 74.25 1280 1390 1430 1650 720 725 730 750 +hsync +vsync (45.0 kHz e) [397158.320] (II) RADEON(0): Modeline "720x480"x0.0 27.00 720 736 798 858 480 489 495 525 -hsync -vsync (31.5 kHz e) The hardware uses an AMD Radeon HD 8210 graphics chip but the display device that is connected to it is weird. It presents as a 4K screen but needs to be run at 1920x540 since it is half height. It has been reported that after a while X just dies. What might be happening?
Q: Generate and read password protected ZIP with Javascript
A password-protected ZIP file should be generated with Angular / JavaScript. JSZip cannot generate or read password protected files. Is there a workaround or are there other tools that can be used?
A: You can try using archiver (https://www.npmjs.com/package/archiver) in combination with archiver-zip-encrypted (https://www.npmjs.com/package/archiver-zip-encrypted). It also comes with types for TypeScript (https://www.npmjs.com/package/@types/archiver). This is a solution if you work on a desktop application and have adjusted your pipeline; a web application will not have access to the filesystem.
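For the Node/desktop case, here is a minimal sketch based on my reading of the archiver-zip-encrypted README; the file names and password are placeholders, so double-check the options against the current docs:

const fs = require('fs');
const archiver = require('archiver');
// Register the encrypted format once, then create archives with a password.
archiver.registerFormat('zip-encrypted', require('archiver-zip-encrypted'));
const archive = archiver.create('zip-encrypted', {
  zlib: { level: 8 },
  encryptionMethod: 'aes256', // or 'zip20' for the legacy Zip 2.0 scheme
  password: '123'
});
archive.pipe(fs.createWriteStream('out.zip'));
archive.append('hello world', { name: 'hello.txt' });
archive.finalize();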
Q: Linux : Copy Multiple files and remove its extension
I have got a directory containing files of type *.cpp. So I would like to copy each file in the directory and paste it in the same directory, using cp -a *.cpp with an option to remove the .cpp while pasting. Is it possible?
A: Here is a simple bash script. This script assumes that the file name only contains one "." character and splits based on that.
#!/bin/sh
for f in *.cpp; do
    #This line splits the file name on the delimiter "."
    baseName=`echo "$f" | cut -d "." -f 1`
    newExtension=".new"
    cp "$f" "$baseName$newExtension"
done
A: You can do this just by means of bash parameter expansion, as mentioned in the bash manual:
${parameter%%word} Remove matching suffix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``%'' case) or the longest matching pattern (the ``%%'' case) deleted. ...
for i in *.cpp
do
    cp -a "$i" "${i%%.cpp}"
done
A: You can use rename, optionally with -f to force rewriting existing files.
rename -f 's/\.ext$//' *.ext
To preview the actions without changing any files, use the -n switch (no action). This is not copy but move :-(
Q: Removing need for passphrase key in Google Cloud Projects
I am trying to set up datalab notebooks in a Google Cloud project. I screwed up and entered a passphrase during the first $ datalab connect INSTANCE_NAME install. I quickly realized that I wished I hadn't done that, so I deleted the instance and tried to reinstall. It asked again. So, I did a bit of googling (after just deleting the new project and creating a new one), and discovered that the passphrase is required across projects. So, I went to the metadata tab and deleted it through there - but it comes back whenever I try and create an instance (on any project) through the terminal. Ok. So, I tried using gcloud to change the instance to not need the project passphrase, using
$ gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=TRUE
Same thing. Please, what the heck am I missing? How do I just permanently remove the need for a passphrase when setting up an instance in datalab from the ssh terminal? I wouldn't mind using the passphrase so much, but whenever I enter it, the terminal just stops (not a hard stop; it just sits there without processing until I ctrl+C and force stop. I can type and enter and whatever, but it doesn't register my passphrase.) Any help would be greatly appreciated. FYI, I am setting all this up using a stock Pixelbook. That shouldn't matter since everything is through Google Cloud, but there ya go. Thanks!
A: The passphrase isn't tied to your GCP project or Datalab instances in any way. Instead, it is a property of your local private SSH key. This file usually winds up under ~/.ssh and is named something like google_compute_engine. Since you mention using a Pixelbook, I assume you are running the datalab connect command from Cloud Shell. In that case, this file is stored inside of your Cloud Shell instance. Delete that file, and then the next run of datalab connect will generate a new one (for which you can leave the passphrase empty).
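In Cloud Shell that boils down to something like this (the key file name shown is the gcloud default; adjust if yours differs):

rm ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.pub
datalab connect INSTANCE_NAME   # generates a fresh key; just press Enter to leave the passphrase empty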
Q: Remove numbers at beginning of filenames in directory in bash In an attempt to rename the files in one directory with numbers at the front I made an error in my script so that this happened in the wrong directory. Therefore I now need to remove these numbers from the beginning of all of my filenames in a directory. These range from 1 to 3 digits. Examples of the filnames I am working with are: 706terrain_Slope1000m_Minimum_all_25PCs_bolt_all_25PCs_qq_bolt.png 680met_sfcWind_all_25PCs_bolt_number.txt 460greenness_NDVI_500m_min_all_25PCs_bolt_number.txt I was thinking of using mv but I'm not really sure how to do it with varying numbers of digits at the beginning, so any advice would be appreciated! A: A simple way in bash is making use of a regular expression test: for file in *; do [[ -f "${file}" ]] && [[ "${file}" =~ (^[0-9]+) ]] && mv ${file} ${file/${BASH_REMATCH[1]}} done This does the following: * *[[ -f "${file}" ]]: test if file is a file, if so *[[ "${file}" =~ (^[0-9]+) ]]: check if file starts with a number *${file/${BASH_REMATCH[1]}}: remove the number from the string file by using BASH_REMATCH, a variable that matches the groupings from the regex match. A: If you've got perl's rename installed, the following should work : rename 's/^[0-9]{1,3}//' /path/to/files /path/to/files can be a list of specific files, or probably in your case a glob (e.g. *.{png,txt}). You don't need to select only files starting with digits as rename won't modify those that do not. A: Using bash parameter expansion: shopt -s extglob for i in +([0-9])*.{txt,png}; do mv -- "$i" "${i##+([0-9])}" done This will remove starting digits (any number) in filenames having png and txt extension. The ## is removing the longest matching prefix pattern. The +(...) is path name expansion syntax for repeated characters. And [0-9] is pattern matching digits. A: Alternate method using GNU find: #!/usr/bin/env bash find ./ \ -maxdepth 1\ -type f\ -name '[[:digit:]]*'\ -exec bash -c 'shopt -s extglob; f="${1##*/}"; d="${1%%/*}"; mv -- "$1" "${d}/${f##+([[:digit:]])}"' _ {} \; Find all actual files in current directory whose name start with a digit. For each found file, execute the Bash script below: shopt -s extglob # need for extended pattern syntax f="${1##*/}" # Get file name without directory path d="${1%%/*}" # Get directory path without file name mv -- "$1" "${d}/${f##+([[:digit:]])}" # Rename without the leading digits A: Using basic features of a POSIX-compliant shell: #!/bin/sh for f in [[:digit:]]*; do if [ -f "$f" ]; then pf="${f%${f#???}}" pf="${pf##*[[:digit:]]}" mv "$f" "$pf${f#???}" fi done
Q: "stateless" consistency: Constantly check if client is still connected? So the use case is this thin line between being stateful and not being stateful -- payment apis. So when I worked for a payment gateway, our connection to the processor was over TCP, so it was easy to verify that the client or server got the entire message, but when you have to provide a REST API which is supposed to be stateless, it's harder to know. A lot of scenarios can lead to duplicate transactions such as: * *Mobile app sends a payment request *Server processes the message *Mobile app loses its connection *Server returns a response but the client never gets it so it doesn't know if it was successful or not. *Mobile app sends the same payment request again On one hand, we could place a cache in between that basically locks the same transaction from being performed again (client has to provide a unique operation/transaction id that we use), but I feel like that comes with other complexities like invalidation. I wonder if at least this scenario could be covered using wire protocol in .net? So I thought to try something like this: public async Task<IActionResult> Do(CancellationToken abort) { // simulate processing await Task.Delay(5_000); // see if client is still connected if (abort.IsCancellationRequested) { // if its not, clean up or rollback etc. Console.WriteLine("do a rollback"); } return Ok(); } The problem with this is that, not only can the client still lose connection while writing the response, but even the check itself could be wrong. For example, if the client loses connection, then it never sent the disconnect command, we'd still think they were connected until the server keep-alive fails and it times out and by the time it does, the client may have already started a retry. I'm wondering if there is a way to have my service rapidly send keep-alives (ex. .5-1 second intervals) so that we can fail early and roll back. And the followup question is: is there anyway to check after return Ok() that the client received the full response? Maybe with middleware that can dig out an id and throw if (to trigger a rollback) the response wasn't fully read? A: when I worked for a payment gateway, our connection to the processor was over TCP, so it was easy to verify that the client or server got the entire message Very similar problems exist at the TCP level. You can have the client send an ack message, but what happens if connectivity is lost immediately after the server receives it? The server would not be able to send a TCP ACK, so as far as the client knows, the server never got the ack message and it should re-send the transaction. The window may be smaller, but this problem never goes away completely; that's the nature of distributed computing. On one hand, we could place a cache in between that basically locks the same transaction from being performed again (client has to provide a unique operation/transaction id that we use), but I feel like that comes with other complexities like invalidation. The standard solution is to make requests idempotent if possible. This can be done with a cache; usually some long lifetime like 7 or 30 days is easy to implement and leaves very little room for missed transactions. My favorite implementation for this kind of "de-duplication" cache is CosmosDb because it's highly reliable, fast, and supports expiration. Bonus if you add a timestamp to the transaction and have the client refuse to send ones (or the server refuse to accept ones) that are too old. 
Then your requests are idempotent, and the client can call them all day if it wants. "I wonder if at least this scenario could be covered using the wire protocol in .NET?" Not without a lot of difficulty to achieve something of very questionable benefit. "if the client loses connection, then it never sent the disconnect command, we'd still think they were connected until the server keep-alive fails and it times out and by the time it does, the client may have already started a retry." Yup. "I'm wondering if there is a way to have my service rapidly send keep-alives (ex. .5-1 second intervals) so that we can fail early and roll back." Probably not. It might be possible to get at the underlying socket (though I doubt it), and then you could do some hacky stuff to turn on per-socket TCP/IP keepalives. But even if this were possible, it wouldn't get you much. TCP/IP keepalives can be dropped by any intervening network (they're an optional part of the TCP/IP specification). And even if that were possible and it actually worked, then you'd just have a smaller window for the exact same problem to occur - the problem hasn't actually been solved. "And the follow-up question is: is there any way to check after return Ok() that the client received the full response? Maybe with middleware that can dig out an id and throw (to trigger a rollback) if the response wasn't fully read?" Nope. The HTTP protocol gives you one response per request; that's it. Technically, the socket would either know that the client received the entire response or that the client may or may not have received the entire response. That information isn't exposed to your app by any framework I'm aware of, mainly because it's not really useful information. TL;DR: Design for idempotency. Use the unique id you are given, and count yourself blessed that you even have that. Not all systems do.
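As a minimal sketch of that de-duplication idea in the question's own C# setting (in-memory only, with invented names like PaymentRequest and ProcessPaymentAsync; a real gateway would back this with a durable store such as the CosmosDb cache mentioned above, and would use an atomic get-or-add to close the race between two identical concurrent requests):

private static readonly ConcurrentDictionary<string, object> _processed = new();

[HttpPost]
public async Task<IActionResult> Pay(string operationId, PaymentRequest request)
{
    // Replay the stored outcome instead of charging the card twice.
    if (_processed.TryGetValue(operationId, out var previous))
        return Ok(previous);

    var result = await ProcessPaymentAsync(request); // assumed to exist
    _processed[operationId] = result;
    return Ok(result);
}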
Q: ModuleNotFoundError: folder structure problem in my scrapy project?
I am new to scrapy and vscode, and my project was working perfectly fine until I decided to get tidy with the folders before uploading to GitHub. After that, the whole project is not working anymore. I am pretty sure I messed up the folder structure:
└── real_estate/
    ├── project1/
    │   ├── project1_scrapy/
    │   │   ├── spiders/
    │   │   │   ├── __init__.py
    │   │   │   └── project1__spider.py
    │   │   ├── items.py
    │   │   ├── middlewares.py
    │   │   ├── pipelines.py
    │   │   └── settings.py
    │   └── scrapy.cfg
    └── project2/
        ├── project2_scrapy/
        │   ├── spiders/
        │   │   ├── __init__.py
        │   │   └── project2__spider.py
        │   ├── items.py
        │   ├── middlewares.py
        │   ├── pipelines.py
        │   └── settings.py
        └── scrapy.cfg
I am running the crawler in the folder that holds scrapy.cfg. Still getting the following error:
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'project1'
A: You are getting ModuleNotFoundError: No module named 'project1' because you can't rename the module folder that scrapy startproject created: it would originally have been project1/project1, but in your case it's project1/project1_scrapy. It seems clear that you appended _scrapy to the inner project1 folder, which is why scrapy can't find project1 and shows the mentioned error. If you go to your project's settings.py file you can see your project name, like:
BOT_NAME = 'project1'
SPIDER_MODULES = ['project1.spiders']
NEWSPIDER_MODULE = 'project1.spiders'
So either remove the _scrapy from project1_scrapy (or create a new project and never rename the module folder), or correct the spider module name in the settings.py file so it matches the project folder. If you change something like the bot name or the spider modules, you also need to change that portion of the settings.py file. For example, since you changed the module folder from project1 to project1_scrapy, you also have to change
SPIDER_MODULES = ['project1_scrapy.spiders']
If you don't change anything, your project folder structure stays the one that scrapy startproject generated.
A: You can change the SPIDER_MODULES in project1/project1_scrapy/settings.py to make scrapy search the correct directory for the spider https://docs.scrapy.org/en/latest/topics/settings.html#spider-modules For your case,
SPIDER_MODULES = ["project1_scrapy.spiders"]
Q: Core Data fetchedObjects count throws EXC_BAD_ACCESS First let me explain the flow of my app. When I launch the app I check whether the user is logged in; I do this check in -(void)viewWillAppear:(BOOL)animated, and if not I show the login controller. This works perfectly. In my loadView I access my Core Data stack and try to get the fetchedObjects to show in a table view; by tapping one of the cells I show more information about that cell's object. This is how I do it:

AppDelegate *app = (AppDelegate*)[[UIApplication sharedApplication] delegate];
NSManagedObjectContext *context = [app managedObjectContext];
NSFetchRequest* fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Sites" inManagedObjectContext:context];
NSError *error;
[fetchRequest setEntity:entity];
fetchedObjects = [context executeFetchRequest:fetchRequest error:&error];

Now when the view loads the first time I get the following in the debugger:

(lldb) po fetchedObjects
(NSArray *) $1 = 0x00352860 <_PFArray 0x352860>(
<NSManagedObject: 0x352550> (entity: Sites; id: 0x34c9b0 <x-coredata://7CD0A735-BC41-4E7A-8B07-C957E6096320/Sites/p1> ; data: <fault>)
)

which appears to be fine. Now viewWillAppear gets called, the login view gets shown, the user logs in, and the login view is popped from the navigation stack; then the table view's tableView:cellForRowAtIndexPath: gets called again. When I break there and po my fetchedObjects again I get:

(lldb) po fetchedObjects
(NSArray *) $3 = 0x00352860 [no Objective-C description available]

This I don't understand; why does the data not persist? The exception gets thrown in:

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [fetchedObjects count];
}

fetchedObjects is a member of the class; I never release it and I never change its value.

A: fetchedObjects = [context executeFetchRequest:fetchRequest error:&error]; returns an autoreleased object. If you want it to hang around, you have to retain it, or assign it via your retained property accessor if one exists (this is the preferred method):

self.fetchedObjects = [context executeFetchRequest:fetchRequest error:&error];
Q: Problems in creating graphs using dataplot I have four simple databases:

db1
dy,volume
2009,120000
2010,160000
2011,400000
2012,650000
2013,1000000
2014,1500000

db2
dy,volume
2009,400000
2010,500000
2011,1600000
2012,2200000
2013,2500000
2014,4000000

db3
dy,volume
2009,100000
2010,120000
2011,150000
2012,160000
2013,400000
2014,1000000

db4
dy,volume
2009,250000
2010,400000
2011,750000
2012,900000
2013,1400000
2014,3000000

From these four files I'd like to create a simple graphical representation like the one shown (image omitted). I am using the dataplot package for this purpose. Here is my source file, test.tex:

\documentclass{book}
\usepackage{dataplot}
\begin{document}
\DTLloaddb{db1}{db1.csv}
\DTLloaddb{db2}{db2.csv}
\DTLloaddb{db3}{db3.csv}
\DTLloaddb{db4}{db4.csv}
\begin{figure}[htbp]
\centering
\DTLplot{db1,db2,db3,db4}{x=dy,y=volume, width=3in,height=3in,style=lines,legend,legendlabels={Legend1,Legend2,Legend3,Legend4}, xlabel={Year},ylabel={Volume},box, xticpoints={2009,2010,2011,2012,2013,2014} }
\caption{A simple graph}
\end{figure}
\end{document}

But I am getting this error over and over: Package datatool Error: Can't assign \DTLthisX : there is no key `dy' in database `db1'. If someone could help me out in this regard, either by pointing out my error or with any useful information to handle this, I'd be grateful. One more question: as my databases contain huge numbers (i.e. > 100000), how can I show these on the y-axis as 100k and so on, where k denotes kilo? Any ideas?

A: I'm not familiar with dataplot, but here is an example of what you can do with pgfplots; I've only plotted the first two sets of data, but you can easily build on the existing code to include the last two. Regarding the "huge numbers" on the y-axis, whether you end up using dataplot or pgfplots, I strongly advise you to scale them by $10^{-3}$ and make that factor explicit (here via the k suffix in the tick labels); the numbers become much easier to read (nobody likes to count loads of zeros). However, it's really up to you. I've used filecontents to simulate the existence of your .csv files.

\documentclass{article}
\usepackage{filecontents}
\usepackage{pgfplots}
\pgfplotsset{compat = 1.3}
\begin{document}
\begin{filecontents*}{db1.csv}
dy,volume
2009,120
2010,160
2011,400
2012,650
2013,1000
2014,1500
\end{filecontents*}
\begin{filecontents*}{db2.csv}
dy,volume
2009,400
2010,500
2011,1600
2012,2200
2013,2500
2014,4000
\end{filecontents*}
\begin{tikzpicture}
\begin{axis}[%
width=\textwidth,
ylabel shift=1ex,
enlargelimits=0.13,
tick align=outside,
legend style={cells={anchor=west},legend pos=north east},
xtick={2009,2010,...,2014},
xticklabels={2009,2010,2011,2012,2013,2014},
ytick={500,1000,...,4000},
yticklabels={500k,1000k,1500k,2000k,2500k,3000k,3500k,4000k},
xlabel=\textbf{year},
ylabel=\textbf{volume}
]
\addplot[mark=none,blue] table [x=dy, y=volume, col sep=comma] {db1.csv};
\addlegendentry{db1}
\addplot[mark=none,red] table [x=dy, y=volume, col sep=comma] {db2.csv};
\addlegendentry{db2}
\end{axis}
\end{tikzpicture}
\end{document}
Q: Single word for correcting controls out of bounds In a software development environment or a page-layout application, you can place elements outside the control bounds or outside the page; the coordinates can, for example, have negative values. We're now introducing a button that moves items outside the visible bounds back inside. I suggested "Bring all controls into view" and "Adjust clipped controls". Is there a shorter, e.g. single-word, label that describes the same thing? A: If you are bringing items that are outside a bounded region into that region, then the items are being constrained. This term is used more widely than for out-of-bounds objects on a printed page, but it is applicable: the bounds of the page are the constraints, and the act of bringing the objects back into view is to constrain them. A: Given the need to inform non-technical users (not only programmers) what will happen when they invoke the feature, I'd suggest: Show hidden controls (may look jumbled). If you want to avoid alarm, perhaps "invisible" instead of "hidden".
Q: Why is my Fourier transform negative in python? How do I fix it? I am trying to code a Fourier transform in Python so I can take the transform of some data from a couple of signals, but for some reason my results have a weird negative component to them. Searching around the web, I cannot figure out what I am doing wrong.

#Library function calls
import scipy.fft as ft #This library helps with performing the transform
import matplotlib.pyplot as plt #This will allow us to plot the data
import numpy as np #This will allow us to use arrays

#Import the textfiles made by LabView
time, radio, trial = np.loadtxt("trialC4.xls", float, unpack=True)
print("time:", time)
print("")
print("radio:", radio)
print("")
print("trial:", trial)

#Take the Fourier Transform of the signals
fft_radio = ft.rfft(radio)
fft_trial = ft.rfft(trial)

#Plot the signals
plt.plot(time, radio)
plt.plot(time, trial)
plt.show()
plt.plot(fft_radio)
plt.show()
plt.plot(fft_trial)
plt.show()

Here are the results of this code: the signals before being transformed, then each signal after transformation (plots omitted). What do I need to do so the transformed signals are not negative?

A: Fourier transforms always go from complex numbers to complex numbers. Real values are also elements of the set of complex numbers (just with the imaginary component being 0). And complex numbers live in the complex plane, where you can have negative values for the real and imaginary components respectively. If you want the magnitude spectrum, you must take the absolute value of each bin (i.e. the magnitude), calculated as sqrt(Re(x)² + Im(x)²). With numpy you can put the results through a simple numpy.abs(fft(...)).
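As a minimal runnable sketch of the fix the answer describes (the synthetic signal below stands in for the data loaded from trialC4.xls in the question):

import numpy as np
import scipy.fft as ft
import matplotlib.pyplot as plt

# Fabricate a real-valued test signal in place of the LabView data
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
radio = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = ft.rfft(radio)      # complex coefficients
magnitude = np.abs(spectrum)   # sqrt(Re^2 + Im^2), always >= 0

plt.plot(magnitude)
plt.show()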
Q: git: how to rebase a branch after it was merged and keep the merge commit's changes I have a history like this:

A - B - M
  \    /
    C

A, B and M are on master; C is on a feature branch. I made two mistakes: I didn't realize that the company remote doesn't accept merge commits before I made one, and I changed a lot of things in the merge commit apart from simply resolving the conflict. I wanted to rebase so it would look like A - B - C - M, with C and M probably squashed together. I only found one question on the internet which looked quite similar to my case; the only response was "merge is fine". I admit I'm still not 100% familiar with the rebase syntax, but whatever combination I told git to rebase, with or without -p and/or -i, it either said there is nothing to rebase (noop) or said it's not working. What seemed to be the logical choice was to check out C and rebase -ip master, but it didn't quite do what I expected.

A: Given this history:

A - B - M
  \    /
    C

at M, if you soft reset to B and commit, you will end up with A - B - M', which seems to be what you want:

git checkout M
git reset B
git commit

The content of the branch will remain the same; none of these commands change that. Only C gets eliminated from the history, making it look like a straight branch.
Q: Python, Selenium... not able to find an element which is obviously there I am trying to use Python (Selenium) to extract data from this site: https://sin.clarksons.net/ After I put in the user name and password, it is not able to click the obvious "Submit" button. Can some of you help me see why? TIA.

import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

if __name__ == "__main__":
    try:
        chrome_path = r"C:\Users\xxx\Downloads\chromedriver_win32\chromedriver.exe"
        driver = webdriver.Chrome(chrome_path)
        driver.get("https://www.clarksons.net/")
        driver.maximize_window()
        time.sleep(5)
        login = driver.find_element_by_xpath('//*[@id="menu"]/li[1]/span')
        time.sleep(5)
        login.click()
        time.sleep(5)
        username = driver.find_element_by_xpath('//input[@id="usernameText"]')
        username.clear()
        username.send_keys("abc(at)hotmail.com")
        password = driver.find_element_by_xpath('/html/body/div[6]/div/div/div[2]/form/div[2]/div/input[1]')
        password.clear()
        password.send_keys("xyzabc")
        submit = driver.find_element_by_xpath('/html/body/div[6]/div/div/div[2]/form/div[4]/div/div/button')
        submit.click()
        time.sleep(5)
        print "login"
        driver.quit()
    except Exception as e:
        print e
        driver.quit()

A: Try this: replace the XPaths with ids, and use a CSS selector for the login button.

username = driver.find_element_by_id("usernameText")
username.clear()
username.send_keys("[email protected]")
password = driver.find_element_by_id("passwordText")
password.clear()
password.send_keys("xyzabc")
#submit = driver.find_element_by_xpath(".//button[@title='Login']")
submit = driver.find_element_by_css_selector("#home button.btn-primary")
submit.click()

A: You can find the login button by title:

submit = driver.find_element_by_xpath('//button[contains(@title, "Login")]')
submit.click()

OR you can find the form and then submit it (found based on class):

submit_form = driver.find_element_by_xpath('//form[starts-with(@class, "ng-valid ng-dirty")]')
submit_form.submit()

Hope this helps.

A: The XPath you are using is wrong; the correct one is '/html/body/div[6]/div/div/div[2]/form/div[4]/button'. As a side note, you really shouldn't use absolute XPaths; for example, you can use '//button[@title="Login"]' for the login button.
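Whichever locator is used, the fixed time.sleep() calls in the question are fragile: the page may not be ready yet when the lookup runs. An explicit wait retries the lookup until the element is actually clickable. A sketch (assuming driver is the WebDriver instance from the question, and reusing the title-based XPath from the answers above):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)  # poll for up to 10 seconds
submit = wait.until(EC.element_to_be_clickable((By.XPATH, '//button[@title="Login"]')))
submit.click()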
Q: How to search between two datetimes in PHP and MySQL? I have a table called reports in MySQL (MariaDB). One of its five columns, named logdate, is of type datetime; it stores the date and time (in 24-hour format). A sample value from that column is 2021-04-10 09:35:00. I have to find all reports between a given date and time. I get 4 variables from form data in PHP:

$fromdate = $_POST['fromdate'];
$todate = $_POST['todate'];
$fromtime = $_POST['fromtime'];
$totime = $_POST['totime'];

$fromtime and $totime are just integers with values from 0-23 for hours. For example, the condition may be: get all data from 4 April 2021 at 5 o'clock to 8 April 2021 at 18 o'clock, i.e. from 2021-04-04 05:00:00 to 2021-04-08 18:00:00. There will never be a condition on minutes or seconds. My question is: how do I construct a datetime in PHP compatible with MySQL's types so that I can get good search speed (the search must be efficient; there are millions of records in the table)? For example:

$select = "select * from reports where logdate between ? and ? ";

P.S.: I tried saving the date and time as an integer Unix timestamp, but when I convert the received from and to dates using strttotime() I face a time format issue due to a bug in my code, so I can use datetime only. If you have any suggestions to improve the efficiency of the DB, please share. Thanks.

A: Hi, this link may be of help in optimizing the date comparison: MySQL SELECT WHERE datetime matches day (and not necessarily time). This one below will help you in formatting your strtotime() by using strptime(): https://www.php.net/manual/en/function.strptime.php Also check your spelling: you wrote "strttotime()" instead of "strtotime()"; yours has an extra 't' in str"tto"time, it should be str"to"time (without the double quotes).

A: Though I can't say for sure this is the most effective way, you can use hour(logdate) to compare with $fromdate and $todate:

$select = "select * from reports where hour(logdate) between ? and ? ";

But it will only compare the hour part. Please mention how you are getting the date part to compare.

A: It is not a good idea to apply a calculation to a field in the WHERE clause. In that case MySQL/MariaDB must calculate the value from the field for every row to see whether the row matches the condition, so it must read the whole table (a FULL TABLE SCAN) and can't use any index. A better way is to put the calculation on the constant side of the comparison: then MySQL calculates it only once and can use an index (if there is one). You can easily use a query like this:

$select = "SELECT * FROM reports where logdate between date(?) + INTERVAL ? HOUR AND date(?) + INTERVAL ? HOUR ";

To test, see: SELECT date('2021-04-05') + INTERVAL 16 HOUR; result: 2021-04-05 16:00:00

A: Here is what is working for me after using Bernd's solution. I construct the datetime strings in PHP:

$fromstr = "$fromdate"." "."$fromtime".":00:00";
$tostr = "$todate"." "."$totime".":00:00";

Here is what my query looks like for a date range of 7 April to 10 April:

$select = "SELECT * FROM reports where logdate >= '$fromstr' and logdate <= '$tostr' order by logdate";

After echoing it: SELECT * FROM reports where logdate >= '2021-04-07 3:00:00' and logdate <= '2021-04-10 5:00:00' order by logdate. However, I am not sure whether it can use an index on the logdate column with the query above.
Q: Rails: Active Record Timeout This is the code currently in place. When I add this to the cron with timeout, the entire array gets saved twice; when I remove timeout, nothing gets saved. In this scenario we want to save the array results (coming in from an API), over 100k records, to the DB. I have used the bulk_insert and TinyTds gems here.

ActiveRecord::Base.establish_connection(adapter: 'sqlserver',
    host: "xxx",
    username: "xxx",
    password: "xxx",
    database: "xxx",
    azure: true,
    port: 1433,
    timeout: 5000)

class Report < ActiveRecord::Base
  self.primary_key = 'id'
end

my_array = [] #count of 100000 records

Report.bulk_insert(:account_owner_id) do |worker|
  my_array.drop(2).each do |arr|
    worker.add account_owner_id: arr[0]
  end
end

A: You can try removing timeout and adding ignore: true to your bulk insert as shown here; there may be an insert that is failing.

Report.bulk_insert(:account_owner_id, ignore: true) do |worker|
Q: Read XML into DataTable I have the following XML being returned for a WebRequest:

<DataSet>
  <xs:schema id="FoxProDataTable" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <xs:element name="FoxProDataTable" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
      <xs:complexType>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element name="FoxProDataRow">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="d_alias" minOccurs="0">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:maxLength value="8" />
                    </xs:restriction>
                  </xs:simpleType>
                </xs:element>
                <xs:element name="d_audit" type="xs:boolean" minOccurs="0" />
                <xs:element name="d_auditkey" minOccurs="0">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:maxLength value="50" />
                    </xs:restriction>
                  </xs:simpleType>
                </xs:element>
                . . . . . .
                <xs:element name="d_version" type="xs:decimal" minOccurs="0" />
                <xs:element name="d_custom" type="xs:boolean" minOccurs="0" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:choice>
      </xs:complexType>
    </xs:element>
  </xs:schema>
  <diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <FoxProDataTable>
      <FoxProDataRow diffgr:id="FoxProDataRow1" msdata:rowOrder="0">
        <d_alias>ADJ </d_alias>
        <d_audit>false</d_audit>
        <d_auditkey xml:space="preserve"> </d_auditkey>
        <d_auditon>false</d_auditon>
        <d_chadate xml:space="preserve"> </d_chadate>
        <d_convert xml:space="preserve"> </d_convert>
        <d_create>ADJ </d_create>
        <d_desc>Employer Quarter Adjustment </d_desc>
        <d_encrypt>true</d_encrypt>
        <d_file>PRQTRADJ</d_file>
        <d_key1>Company </d_key1>
        <d_key2 xml:space="preserve"> </d_key2>
        <d_key3 xml:space="preserve"> </d_key3>
        <d_key4 xml:space="preserve"> </d_key4>
        <d_massup>false</d_massup>
        <d_msc>false</d_msc>
        <d_parent xml:space="preserve"> </d_parent>
        <d_prod>PR</d_prod>
        <d_recsize>0</d_recsize>
        <d_required>true</d_required>
        <d_type>R </d_type>
        <d_version>9.100100</d_version>
        <d_custom>false</d_custom>
      </FoxProDataRow>
    </FoxProDataTable>
  </diffgr:diffgram>
</DataSet>

I am trying to read this into a DataSet/DataTable like so:

XmlDocument _xmlDoc = GetResponseAsXml(_url, _request, HttpMethods.GET);
DataSet _dataSet = new DataSet();
_dataSet.ReadXml(new XmlTextReader(new StringReader(_xmlDoc.OuterXml)));
DataTable _dataTable = _dataSet.Tables[0];

When I inspect _dataTable, the columns match the schema (screenshot omitted), but the row does not show the expected data (screenshot omitted). How can I get the data into the table?

A: I quickly tested this with your XML saved in a file: loaded the XML into the DataSet and hooked it up to a DataGridView to see your data.

DataSet _dataSet = new DataSet();
_dataSet.ReadXml(@"<Path to your XML>");
dataGridView1.DataSource = _dataSet.Tables[0];

This worked fine; the data was all there in the DataGridView. Hope that helps.
Q: Understanding multiple regression output I am a first-year psychology student. I am doing some research work with a prof; unfortunately, the material I need to use right now is only covered in my second year, but I need to know it now, so I am burning through any resources I can find to quickly come up to speed. I need help understanding this particular situation, which involves SAS and regression analysis. When I ran a regression in SAS (proc reg) using two variables, say a and b, I got the output below. I understand this as saying that both these variables (a and b) do not significantly predict my target variable. Here is the SAS output:

Analysis of Variance

Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              2          3.32392       1.66196      1.00   0.3774
Error             46         76.80649       1.66971
Corrected Total   48         80.13041

Root MSE          1.29217    R-Square   0.0415
Dependent Mean   -0.23698    Adj R-Sq  -0.0002
Coeff Var      -545.26074

Parameter Estimates

Variable    DF   Parameter Estimate   Standard Error   t Value   Pr > |t|   Standardized Estimate
Intercept    1             -0.25713          0.18515     -1.39     0.1716                       0
a            1             -0.35394          0.28797     -1.23     0.2253                -0.19510
b            1             -0.04706          0.39586     -0.12     0.9059                -0.01887

Now I tried to include the interaction of a and b. Let's call it aXb; now the output indicates that a and aXb significantly predict my target variable.

Analysis of Variance

Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              3         16.64439       5.54813      3.93   0.0142
Error             45         63.48602       1.41080
Corrected Total   48         80.13041

Root MSE          1.18777    R-Square   0.2077
Dependent Mean   -0.23698    Adj R-Sq   0.1549
Coeff Var      -501.20683

Parameter Estimates

Variable    DF   Parameter Estimate   Standard Error   t Value   Pr > |t|   Standardized Estimate
Intercept    1             -0.06807          0.18098     -0.38     0.7086                       0
a            1              3.01517          1.12795      2.67     0.0104                 1.66201
b            1             -0.00994          0.36407     -0.03     0.9783                -0.00399
aXb          1             -1.13782          0.37029     -3.07     0.0036                -1.90743

Here are my questions: I am not sure what to make of this situation. Taken together, what does this indicate? Also, while you are answering, could you supplement your answer with some resources, good keywords, etc., for me to learn more about these topics? Thank you so much for your help.

A: It seems like you need an introduction to regression. People made book recommendations here. Free book recommendations here. It's hard to make sure you're doing the analysis right when we don't know what the variables are or what the goal is. But based on the output, I can tell you that your second regression specification looks better than your first. I say that because you have two highly significant coefficients, and the adjusted R^2 value took a big jump. Note, though: although I consider these important clues, it is not true that models with more significant coefficients or higher adjusted R^2 are consistently better. There are lots of other issues to consider. Your regression models are predicting Y using a and b. In your second model, the estimated regression equation is -0.06807 + (3.01517 * a) - (0.00994 * b) - (1.13782 * a * b). In other words, plug in a and b, and you get the model's prediction for Y. I could say a lot more, but I'll leave you there and suggest you pick up a textbook. I strongly recommend you try plotting your data: Y with a on the x-axis, Y with b on the x-axis, and a by b as well.

A: The two together don't tell you anything more than the second one would alone! The main effects are uninteresting and misleading when there is interaction present. The second model tells you all you need to know.
Here are a couple of plots, with R code, to help you understand what that second model looks like.

library(lattice)
a <- rep(seq(-1.37, 2.12, (2.12--1.37)/9), 4)
b <- sort(rep(quantile(seq(-1.03, 1.30, .01), c(.2,.4,.6,.8)), 10))
y <- -0.06807 + (3.01517 * a) + (-0.00994 * b) + (-1.13782 * a * b)
xyplot(y ~ a | factor(b))

This one shows the estimated effect of a on y by levels of b. At each level of b, the relationship is positive. This is your significant positive slope for the main effect of a in the presence of the interaction a:b.

a <- sort(rep(quantile(seq(-1.37, 2.12, .01), c(.2,.4,.6,.8)), 10))
b <- rep(seq(-1.03, 1.30, (1.30--1.03)/9), 4)
y <- -0.06807 + (3.01517 * a) + (-0.00994 * b) + (-1.13782 * a * b)
xyplot(y ~ b | factor(a))

This image shows the estimated effects of b on y within levels of a. You can see why you have no significant main effect for b: the direction of the y~b relationship depends on the level of a. Thus there is no independent relationship (imagine averaging those lines), but there is a significant interaction (a clear pattern once you take into account the level of a).

A: You may be interested in this introduction to the linear model (the basis of almost all statistical analyses), and linear regression in particular: it thoroughly explains many of the mathematical aspects of linear regression by detailing all the important equations (which are usually left as an exercise almost anywhere else on the Internet); it uses a simple, yet informative enough, data set as an example; and it gives all the R commands required to do the computations step by step, as well as to plot the results.

A: If you want a book specifically on this sort of regression - as opposed to data analysis in general - I recommend Regression Analysis by Example by Chatterjee and Price. Good, not technical, but it doesn't oversimplify.
Q: Explanations of Euler's continued fraction for computing the exponential After looking on the internet for explanations of Euler's continued fraction for computing the exponential, and after reading Euler's own explanations, I still don't understand how Euler found this continued fraction: $$e=2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{1+\dfrac{1}{4+\dfrac{1}{1+\dfrac{1}{1+\dfrac{1}{6+\ddots}}}}}}}}$$ I understand how Euler derived continued fractions for computing square roots, but not for the exponential. Maybe I have missed something, but I really need to understand this. So thanks for your help.
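For context (a well-known restatement, offered as a pointer rather than a full derivation): the fraction above encodes the regular pattern $e = [2;\, 1, 2, 1,\; 1, 4, 1,\; 1, 6, 1,\; \dots]$, i.e. repeating blocks $1,\, 2k,\, 1$. One standard route, essentially the one Euler took via his work on the Riccati differential equation, goes through the much simpler continued fraction
$$\tanh\frac{1}{2} \;=\; \frac{e-1}{e+1} \;=\; \cfrac{1}{2+\cfrac{1}{6+\cfrac{1}{10+\cfrac{1}{14+\ddots}}}},$$
whose partial quotients $2, 6, 10, 14, \dots$ grow arithmetically; the expansion of $e$ itself then follows by standard equivalence transformations of continued fractions.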
Q: How can I know which HALs are to be added to the manifest and matrix files for VINTF? For VINTF (which is part of Project Treble), we need to add each HAL and its version, transport type, etc. to the manifest and matrix files. How can I know which HALs are to be added to device_manifest.xml, device_compatibility_matrix.xml, framework_manifest.xml and framework_compatibility_matrix.xml? A: All the interfaces you need should be added to the device manifest and the framework compatibility matrix. They should list the same HALs; just the parent tag is different.
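As a sketch of what such an entry looks like (android.hardware.light is only an example; the actual set of HALs depends on what your device implements), a device manifest entry for a HIDL HAL has this shape:

<manifest version="1.0" type="device">
    <hal format="hidl">
        <name>android.hardware.light</name>
        <transport>hwbinder</transport>
        <version>2.0</version>
        <interface>
            <name>ILight</name>
            <instance>default</instance>
        </interface>
    </hal>
</manifest>

The matching framework compatibility matrix entry wraps a similar <hal> block in a <compatibility-matrix> parent tag, typically with a version range rather than a single version.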
Q: How to make Visual Studio (2019/2022) link to the normal runtime libraries during a debug build? The reason I want to do this is that the debug libraries are littered with extra "assertion" statements that take ages to start during remote debugging. I hope it's only a matter of replacing Multi-threaded Debug DLL (/MDd) with Multi-threaded DLL (/MD) under Code Generation -> Runtime Library, but I wonder if there are other changes one has to take into account as well? A: This is doable, and it is also good practice for remote debugging of big and complex applications, as explained in Mixing debug and release library/binary - bad practice?. Besides switching the runtime library from Multi-threaded Debug DLL (/MDd) to Multi-threaded DLL (/MD), one needs to take into account debug macros like _ITERATOR_DEBUG_LEVEL, which might otherwise conflict during linking. A typical error message that indicates such a conflict is: error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL'. Once all the conflicting macros have been resolved, the build links against the standard runtime libraries while the debug symbols for the application remain. Also, thanks to @Adrian Mole for the assistance in this matter.
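As a sketch, the relevant fragment of the Debug configuration in a .vcxproj ends up looking something like this (fragment only; the element names and values are the standard MSBuild ones, and the surrounding ItemDefinitionGroup is omitted):

<ClCompile>
  <!-- Link against the release CRT even in the Debug configuration -->
  <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
  <!-- Keep iterator debugging consistent with the release CRT -->
  <PreprocessorDefinitions>_ITERATOR_DEBUG_LEVEL=0;%(PreprocessorDefinitions)</PreprocessorDefinitions>
</ClCompile>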
Q: How do I read files from the test folder? I have a method that finds the location of one image inside another. For testing it, I keep a directory with test images in the test folder. The problem is that, for now, I have to use absolute paths to access the images. Obviously, when the project is opened on another machine the tests will fail because the folder structure changes. How can I access the files while avoiding absolute paths?

@Test
public void whenSubImageNotExistInScreenshotThenReturnPointMinus1() throws IOException {
    final File screenshot = new File("/Users/pavel/GitHub/project/src/test/java/org/project/util/image/screenshot.png");
    final File subImage = new File("/Users/pavel/GitHub/project/src/test/java/org/project/util/image/image.png");
    final Point result = ImageMatcher.findImgFragment(subImage, screenshot);
    System.out.println(result);
    assertTrue(result.getX() != -1);
    assertTrue(result.getY() != -1);
}

I would also appreciate constructive criticism of the test itself, assertTrue(result.getX() != -1), and of the idea of keeping test images in the test folder. I settled on this because nothing better came to mind. Thanks.

A: You can get the path to the directory the application is launched from by calling:

Properties prps = System.getProperties();
String path = prps.getProperty("user.dir");

Or a specific file with:

final File subImage = new File("folder1/folder2/subimage.png");

If the path does not start with the / character, the file is looked up by a relative path rather than an absolute one. Regarding constructive criticism of the test itself: well, if any result other than -1 counts as a successful run of your method, then the test is written correctly. But I would check for a specific expected result (or that the result falls within a certain tolerance of it). As for keeping test images in the test folder: I would explicitly set up a resources folder inside the test folder for all resources. My IDE, for example, lets me mark individual directories as test resources. Apache Maven also documents such a structure as the standard one.
Q: Results out of specified datetime range I have an Oracle query as follows:

Select "customer_name", "partner_name", "process_id", "process_time"
from "ptrans"
where to_char("process_time",'MM/dd/yy hh24:mi:ss TZH') >='02/01/16 00:00:00'
and to_char("process_time",'MM/dd/yy hh24:mi:ss TZH') <='02/29/16 23:59:59'
and "partner_name"='TEST';

The process_time column is of type TIMESTAMP(3) WITH TIME ZONE, and a sample value in it is 18-NOV-16 12.22.19.412 AM -05:00. The issue is that when I execute the query I get data outside the range, e.g. 26-JUN-12 07.38.22.000 AM -04:00, which is not part of the query range. What do I need to change in the query?

A: If you try to compare strings, you'll get string comparison semantics. That means that you are asking whether one string is alphabetically before or after another. Since the string '03/01/2001' comes alphabetically after '02/01/2016', that is going to result in a bunch of issues. Presumably, you want to use date or timestamp comparison semantics so that dates in 2016 are later than dates in 2001. I would guess that you'd want:

where "process_time" >= date '2016-02-01'
and "process_time" < date '2016-03-01'
and "partner_name"='TEST';

You could use an explicit to_date or to_timestamp rather than date literals (or timestamp literals) if you would prefer. In your original query, your literals do not carry a time zone. If your process_time values are potentially in a different time zone than your database server, comparing against dates may not be what you want; you may well need an explicit conversion that includes the time zone you want the date range to be in. If you want to compare against values that include a time zone, use to_timestamp_tz:

where "process_time" >= to_timestamp_tz( '2016-02-01 00:00:00 -05:00', 'YYYY-MM-DD HH24:MI:SS TZH:TZM' )
and "process_time" < to_timestamp_tz( '2016-03-01 00:00:00 -05:00', 'YYYY-MM-DD HH24:MI:SS TZH:TZM' )
and "partner_name"='TEST';
Q: /root/Desktop/SEG3502/BookStoreAppV1/nbproject/build-impl.xml:1045: The module has not been deployed. See the server log for details. Can anybody help me with this, please? Whenever I run my application, I get the following error:

Starting GlassFish Server
GlassFish Server is running.
In-place deployment at /root/Desktop/SEG3502/Lab3/build/web
GlassFish Server, deploy, null, false
/root/Desktop/SEG3502/Lab3/nbproject/build-impl.xml:1045: The module has not been deployed. See the server log for details.
BUILD FAILED (total time: 58 seconds)

A: First undeploy any previous version of your app, stop the GlassFish server, clean and build the current version, and start/deploy again.
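If you prefer to do that cleanup from the command line rather than the IDE, the equivalent asadmin steps look roughly like this (the application and domain names are placeholders for your own):

asadmin list-applications          # find the name of the stale deployment
asadmin undeploy Lab3              # remove the previous version
asadmin stop-domain domain1
asadmin start-domain domain1
asadmin deploy path/to/Lab3.war    # deploy the freshly built artifact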
Q: List records with the most recent date I have the following:

List<Pago> listaPago = dbc.pagos
    .GroupBy(c => c.codigoUsuario)
    .SelectMany(w => w)
    .OrderByDescending(f => f.fechaPago)
    .ToList();

And it shows me the following:

codigoPago codigoUsuario fechaPago
01 Us-01 24/12/2017
02 Us-01 20/11/2017
03 Us-02 22/12/2017

I want the result to show only this:

01 Us-01 24/12/2017
03 Us-02 22/12/2017

...that is, the most recent payment date for each user.

A: The GroupBy idea is correct, but the SelectMany that follows it essentially undoes the effect of the GroupBy and causes all records to be returned. What you need after the GroupBy is a Select (with the right conditions) so that each group yields a single Pago object:

List<Pago> listaPago = dbc.pagos
    .GroupBy(p => p.codigoUsuario)
    .Select(g => g.OrderByDescending(p => p.fechaPago).First()) // this is the important part
    .OrderByDescending(p => p.fechaPago) // here you can order the result by whatever you like
    .ToList();

A: The thing is, you are ordering it by code, .GroupBy(c => c.codigoUsuario); it should be by date, .GroupBy(c => c.fecha)
Q: Is this API repository valid or should it be split up into services? I'm currently working on a new PHP RESTful API for a project of mine. It builds on Slim 4 and uses actions, services, and repositories. However, this architecture is new to me and I have questions that are hard to find good answers to. The API has multiple repositories for handling communication with the database, e.g. for users, categories, and companies. However, I recently added a repository to handle uploaded files, and with it, functions like scaling, compressing, and rotating images. But this repository doesn't communicate with any database; it only communicates with another FTP server using SSH2. Question starts here: Is this even a valid repository if it doesn't communicate with a database and has these functions? Should I split the functionality into multiple services instead? That feels wasteful given the large number of services it would require, unless I rewrite some of the functionality into a module or something similar. Please let me know your thoughts on this and whether I need to clarify anything. If you have any good reading, please share it with me. A: A repository maps the domain layer to the data access layer, the database. For this reason an FTP/SFTP/FTPS/HTTP etc. client is not a repository.
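The distinction is language-agnostic; here is a minimal sketch (in Python purely for brevity, with illustrative names, not the project in question): the repository exists to map between domain objects and database rows, while the file-handling class is a plain service with no such mapping, so it deserves a different name and place in the architecture.

class UserRepository:
    """Maps domain objects to database rows: this is a repository."""
    def __init__(self, db):
        self.db = db  # injected database connection

    def find_by_id(self, user_id):
        return self.db.query("SELECT * FROM users WHERE id = ?", user_id)

class ImageStorageService:
    """Talks to a remote file server and transforms images: a service,
    not a repository, because no domain-to-database mapping is involved."""
    def __init__(self, sftp_client):
        self.sftp = sftp_client  # injected SFTP/SSH2 client

    def upload(self, local_path, remote_path):
        self.sftp.put(local_path, remote_path)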
Q: Why isn't the second HTTPClient working? I have this code:

protected void onHandleIntent(Intent intent) {
    while (true) {
        long endTime = System.currentTimeMillis() + 5*1000;
        while (System.currentTimeMillis() < endTime) {
            synchronized (this) {
                try {
                    wait(endTime - System.currentTimeMillis());
                    HttpClient httpclient = new DefaultHttpClient();
                    HttpPost httppost = new HttpPost("http://www.***.***/***/request_sms.php");
                    String HTML = "";
                    try {
                        List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(2);
                        nameValuePairs.add(new BasicNameValuePair("id", "1"));
                        httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                        HttpResponse response = httpclient.execute(httppost);
                        HTML = EntityUtils.toString(response.getEntity());
                    } catch (ClientProtocolException e) {} catch (IOException e) {}
                    if (HTML.indexOf("[NO TEXTS]") > 0) {
                    } else {
                        Vector<String> all_sms = getBetweenAll(HTML, "<sms>", "<sms>");
                        for (int i = 0, size = all_sms.size(); i < size; i++) {
                            String from = getBetween(all_sms.get(i), "<from>", "</from>");
                            String to = getBetween(all_sms.get(i), "<to>", "</to>");
                            String msg = getBetween(all_sms.get(i), "<msg>", "</msg>");
                            String sent = getBetween(all_sms.get(i), "<sent>", "</sent>");
                            String HTML1 = "";
                            HttpClient httpclient1 = new DefaultHttpClient();
                            HttpPost httppost1 = new HttpPost("http://www.***.***/***/add_sms.php");
                            try {
                                List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(2);
                                nameValuePairs.add(new BasicNameValuePair("from", from));
                                nameValuePairs.add(new BasicNameValuePair("to", to));
                                nameValuePairs.add(new BasicNameValuePair("msg", msg));
                                nameValuePairs.add(new BasicNameValuePair("sent", sent));
                                httppost1.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                                HttpResponse response1 = httpclient1.execute(httppost1);
                                HTML1 = EntityUtils.toString(response1.getEntity());
                                HN.post(new DisplayToast(HTML1));
                            } catch (ClientProtocolException e) {} catch (IOException e) {}
                        }
                    }
                } catch (Exception e) {
                }
            }
        }
    }
}

This is a service; what I want it to do is request, every 5 seconds, a page that lists the pending SMS messages the phone needs to send. I am not at the sending part yet; I just want the HN.post(new DisplayToast(HTML1)) to show up, and then I will continue from there. HTML1 should contain "success", but I don't get anything. I am sure that HTML does not contain "[NO TEXTS]", as I have tested it and it shows the tag with the other tags inside it. What could be wrong? Here are the functions used:

Handler HN = new Handler();

private class DisplayToast implements Runnable {
    String TM = "";
    public DisplayToast(String toast) {
        TM = toast;
    }
    public void run() {
        Toast.makeText(getApplicationContext(), TM, Toast.LENGTH_SHORT).show();
    }
}

public String getBetween(String source, String start, String end) {
    int startindex = source.indexOf(start);
    int endindex = source.indexOf(end, startindex);
    String result = source.substring(startindex + start.length(), endindex);
    return result;
}

public Vector<String> getBetweenAll(String source, String start, String end) {
    int startI = 0;
    Vector<String> result = new Vector<String>();
    while (startI + (start.length() + end.length()) < source.length()) {
        int startindex = source.indexOf(start, startI);
        if (startI > startindex) {
            break;
        }
        int endindex = source.indexOf(end, startindex);
        result.add(source.substring(startindex + start.length(), endindex));
        startI = endindex;
    }
    return result;
}

A: Use org.apache.http.impl.client.BasicResponseHandler for getting the HTML content.
HTML1 = httpclient1.execute(httppost1, new BasicResponseHandler()); HN.post(new DisplayToast(HTML1));
Q: Check multiple columns for a single value using a SELECT query I have a table fruits. In the UI, I provide a single field for the search criterion. Using this single search criterion, I want to search in multiple columns of the fruits table. The table contains the columns ID, Desc, Price, Quant, and Stock; Price and Quant are integers and Stock is a varchar. I have tried the query below, which returns results, but I am worried about the performance. Suppose the user enters 2 in the field provided in the UI and clicks search; the query will then be:

select ID, Desc, Price, Quant, Stock
from Fruits
where Price = '2' or Quant = '2' or stock = '2'

Is this the right way to search multiple columns of the same table? Will there be any effect on performance?

A: First, you want to be sure that the types are compatible. In all likelihood these values are numbers, so drop the quotes:

select ID, Desc, Price, Quant, Stock
from Fruits f
where Price = 2 or Quant = 2 or stock = 2;

This can be written more simply as:

select ID, Desc, Price, Quant, Stock
from Fruits f
where 2 in (Price, Quant, Stock);

but that will not help performance. In most databases your query will require a full table scan, although some databases support a particular type of index scan called a skip scan which can help. The only way I can think of to get around that is to have a separate index on each column:

create index idx_fruits_price on fruits(price);
create index idx_fruits_quant on fruits(quant, price);
create index idx_fruits_stock on fruits(stock, quant, price);

(You'll see why the extra columns are helpful.) And then use union all:

select ID, Desc, Price, Quant, Stock from Fruits f where Price = 2
union all
select ID, Desc, Price, Quant, Stock from Fruits f where quant = 2 and price <> 2
union all
select ID, Desc, Price, Quant, Stock from Fruits f where stock = 2 and price <> 2 and quant <> 2;

Each of the subqueries can use one of the indexes. Because of the inequalities, the results are mutually exclusive, assuming the column values are not null. If nulls are allowed, the logic can be adjusted to handle that.
Q: Input type "number": add another option (exception) or a custom input type I'm creating a web service with some forms containing inputs. 99% of the input values will be numbers, but in some cases it may be necessary to enter an X as well. I currently have them as <input type="number">, but is there any way I can customize this so that it makes an exception for certain characters, like X? I'm mostly concerned about tablet/phone views and how easy it will be for the user to enter those values. I was hoping I wouldn't have to fall back to <input type="text">. Thank you
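One common workaround is a sketch along these lines (hedged: exact keyboard behavior varies by browser and OS): use a text input constrained by a pattern, so digits plus X are accepted and the pattern also feeds the browser's built-in form validation. Note the tradeoff: forcing a numeric on-screen keyboard via inputmode="numeric" would make the X hard to reach on most mobile keyboards, so this version leaves the default keyboard in place.

<!-- accepts digits and the letter X; the pattern drives form validation -->
<input type="text" pattern="[0-9X]*" title="Digits or X only">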