220 Points
Questions: 42
Answers: 38
- Asked on July 17, 2020 in XML.
You need to use nodes and CROSS APPLY on your XML data. This appears to get you what you're after:

USE Sandbox;
GO

DECLARE @XML xml = '<store>
  <bookstore>
    <book>
      <ref_title>
        <title>Harry Potter</title>
      </ref_title>
      <ref_author>
        <author>J K. Rowling</author>
      </ref_author>
      <year>
        <this_year>2005</this_year>
      </year>
      <price>
        <dollar>usd</dollar>
        <value>29.99</value>
      </price>
      <price>
        <dollar>aud</dollar>
        <value>49.99</value>
      </price>
    </book>
    <book>
      <ref_title>
        <title>Petes Book of Pie</title>
      </ref_title>
      <ref_author>
        <author>Pete P</author>
      </ref_author>
      <year>
        <this_year>1999</this_year>
      </year>
      <price>
        <dollar>usd</dollar>
        <value>19.99</value>
      </price>
      <price>
        <dollar>aud</dollar>
        <value>39.99</value>
      </price>
    </book>
  </bookstore>
</store>';

SELECT bs.b.value('(ref_title/title/text())[1]','nvarchar(50)') AS Title,
       bs.b.value('(ref_author/author/text())[1]','nvarchar(50)') AS Author,
       bs.b.value('(year/this_year/text())[1]','nvarchar(50)') AS [Year],
       p.d.value('(./text())[1]','nvarchar(50)') AS Dollar,
       p.d.value('(../value/text())[1]','nvarchar(50)') AS [Value]
FROM (VALUES(@XML))V(X)
     CROSS APPLY V.X.nodes('/store/bookstore/book') bs(b)
     CROSS APPLY bs.b.nodes('price/dollar') p(d)
WHERE p.d.value('(./text())[1]','nvarchar(50)') = 'usd';
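Running that against the sample XML, you should get just the usd rows back, something like:

+-------------------+--------------+------+--------+-------+
| Title             | Author       | Year | Dollar | Value |
+-------------------+--------------+------+--------+-------+
| Harry Potter      | J K. Rowling | 2005 | usd    | 29.99 |
| Petes Book of Pie | Pete P       | 1999 | usd    | 19.99 |
+-------------------+--------------+------+--------+-------+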
- 68 views
- 1 answers
- 0 votes
- Asked on July 17, 2020 in XML.
The template to remove any empty end_date would simply be an empty template:

<xsl:template match="end_date[not(node())]"/>
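On its own that template only does its job alongside the usual identity transform, which copies everything else through unchanged. A minimal sketch of a complete stylesheet (assuming nothing in your existing XSLT already handles the copying):

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity transform: copy every node and attribute as-is -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Drop any end_date element that has no child nodes at all -->
  <xsl:template match="end_date[not(node())]"/>

</xsl:stylesheet>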
- 61 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in XML.
Microsoft proprietary OPENXML and its companions sp_xml_preparedocument and sp_xml_removedocument are kept just for backward compatibility with the obsolete SQL Server 2000. It is strongly recommended to rewrite your SQL and switch it to XQuery.

Here is another method, similar to what @AlwaysLearning proposed, but simpler. It uses an XPath predicate instead of a WHERE clause.

SQL
-- DDL and sample data population, start
DECLARE @PagosHoy TABLE (ID INT IDENTITY PRIMARY KEY, NumFinca INT, TipoRecibo INT, Fecha DATE);
DECLARE @xml XML =
N'<Operaciones_por_Dia>
    <OperacionDia fecha="2020-01-30">
        <PagoRecibo TipoRecibo="5" NumFinca="9782331"/>
        <PagoRecibo TipoRecibo="5" NumFinca="6696849"/>
        <TransConsumo id="1" LecturaM3="325" descripcion="Cobro Mensual" NumFinca="3336538"/>
        <TransConsumo id="3" LecturaM3="40" descripcion="Lectura errónea" NumFinca="2425954"/>
    </OperacionDia>
    <OperacionDia fecha="2020-04-08">
        <PagoRecibo TipoRecibo="7" NumFinca="1423800"/>
        <PagoRecibo TipoRecibo="7" NumFinca="1393022"/>
        <TransConsumo id="2" LecturaM3="22" descripcion="Reclamo de cliente" NumFinca="2101885"/>
    </OperacionDia>
</Operaciones_por_Dia>';
-- DDL and sample data population, end

DECLARE @FechaOperacion DATE = '2020-01-30';

INSERT @PagosHoy (NumFinca, TipoRecibo, Fecha)
SELECT c.value('@NumFinca', 'INT')
     , c.value('@TipoRecibo', 'INT')
     , @FechaOperacion AS FechaOperacion
FROM @xml.nodes('/Operaciones_por_Dia/OperacionDia[@fecha eq sql:variable("@FechaOperacion")]/PagoRecibo') AS t(c)

-- test
SELECT * FROM @PagosHoy;
Output
+----+----------+------------+------------+
| ID | NumFinca | TipoRecibo | Fecha      |
+----+----------+------------+------------+
|  1 | 9782331  | 5          | 2020-01-30 |
|  2 | 6696849  | 5          | 2020-01-30 |
+----+----------+------------+------------+
- 48 views
- 2 answers
- 0 votes
- Asked on July 16, 2020 in .NET.
Not exactly sure what you want, but as a start:
create table json_test(id integer, fld_json json);

insert into json_test values
  (1, '[{"a":1,"b":"foo"},{"a":2,"b":"bar"}]'::json),
  (2, '[{"a":3,"b":"test"},{"a": 4,"b":"test2"}]'::json);

select id, a, b
from json_test,
     json_to_recordset(json_test.fld_json) as x(a integer, b varchar);

 id | a | b
----+---+-------
  1 | 1 | foo
  1 | 2 | bar
  2 | 3 | test
  2 | 4 | test2
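If you prefer, the same thing can be written with an explicit LATERAL join against the json_test table above; this is only a stylistic alternative and should produce identical output:

select id, x.a, x.b
from json_test
cross join lateral json_to_recordset(json_test.fld_json) as x(a integer, b varchar);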
- 55 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in .NET.
You can do it using code like this:

// Get the last page of the document
PdfPage page = doc.GetPage(doc.GetNumberOfPages());

// Create a canvas for the last page
Canvas canvas = new Canvas(page, page.GetPageSize());

// Set a fixed position to put the div at the bottom-left corner of the canvas
div.SetFixedPosition(0, 0, page.GetPageSize().GetWidth());

canvas.Add(div);
doc.Close();
- 47 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in .NET.
This question is a mess but, if I'm getting the gist correctly, you just need to pass a delegate to InvokeSafe for a method that interacts with your multiple UI objects. I'm guessing that InvokeSafe is an extension method that tests the InvokeRequired property and calls the Invoke method of the control you call it on; I have to guess because you haven't bothered to explain. In that case, it doesn't matter what control you call it on. The point of those members is to ensure that you execute code on the thread that owns the specific control. That is going to be the same thread for every form and control, so it doesn't matter which control you use to get to that thread.
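Since you haven't shown InvokeSafe, here is a sketch of what I'm assuming it looks like (the name and signature are my guess, not something taken from your code):

using System;
using System.Windows.Forms;

public static class ControlExtensions
{
    // Run 'action' on the UI thread that owns 'control'.
    // If we're already on that thread, just run it directly.
    public static void InvokeSafe(this Control control, Action action)
    {
        if (control.InvokeRequired)
            control.Invoke(action);
        else
            action();
    }
}

With something like that in place, you call it once with a delegate that touches every control you need, e.g. someForm.InvokeSafe(() => { label1.Text = "Done"; progressBar1.Value = 100; }); (names here are placeholders). The receiver only matters as a way to reach the UI thread.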
- 44 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in Mysql.
I can't address the MySQL Workbench side, but here are four methods for connecting within R, depending on your needs and preferences. All four are based on DBI.

Setup
For this demonstration, I’ll use docker to grab mysql-5.7 and run it locally. You won’t need to do this, since you already have MySQL running on your readyNAS.
docker pull mysql:5.7
docker run --name mysql57 -e MYSQL_TCP_PORT=7092 -e MYSQL_ROOT_PASSWORD=my-secret-pw -p "7092:7092" -d mysql:5.7
This is now listening on 0.0.0.0 (meaning all of my laptop's network interfaces) and port 7092. While I can connect below using 127.0.0.1, I'll use my default IP address, which I'll show here as 192.168.1.22 (replace this with your readyNAS IP address).

serverip <- "192.168.1.22"
Lastly, I hope you aren’t using my password here on an internet-facing device 🙂
1. RMySQL
This has the advantage of being the most direct. If this one won’t work for you, then I don’t think any of the others will, so I suggest you start with this.
library(DBI)
con <- dbConnect(RMySQL::MySQL(), user = "root", password = "my-secret-pw",
                 server = serverip, port = 7092, dbname = "mysql")
dbWriteTable(con, "mt", mtcars)
# [1] TRUE
dbGetQuery(con, "select * from mt limit 2")
#       row_names mpg cyl disp  hp drat    wt  qsec vs am gear carb
# 1     Mazda RX4  21   6  160 110  3.9 2.620 16.46  0  1    4    4
# 2 Mazda RX4 Wag  21   6  160 110  3.9 2.875 17.02  0  1    4    4
dbDisconnect(con)
# [1] TRUE
(I hope the SQL gurus will be kind: I recognize that select * from is sloppy, and one should strive to name columns directly and intentionally. Really, using it has bitten me in the past, so I speak from experience. Having said that, for brevity I'll keep using it here.)

2. pool and RMySQL
The pool package features connection management which, among other things, provides automatic reconnects if/when connections time out (and they do, for a plethora of reasons). This provides a little bit of comfort and convenience to "regular users", but if you're thinking about any shiny development, I consider this package "mandatory".

library(pool)
pcon <- dbPool(RMySQL::MySQL(), user = "root", password = "my-secret-pw",
               server = serverip, port = 7092, dbname = "mysql")
dbGetQuery(pcon, "select * from mt limit 2")
poolClose(pcon)
Some operations may not work with the pcon object, in which case you will need to manually "check out" a connection, use it, then return it to the pool.

con <- poolCheckout(pcon)
# ...
poolReturn(con)
For details about the "pool" of connections, see their docs.
3. odbc
Some database setups prefer to keep the configuration of a connection (including server ip, port, and default database) controlled centrally (either a personal or system-wide odbc configuration). I’ve never seen an odbc connection that is faster than one with the native driver (often it is slower), but for most users the slowdown is not noticeable (I believe).
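As a purely illustrative example (the DSN name mysql-nas and the file location are mine, and the exact keys depend on your driver version), a centrally-managed connection might be defined in ~/.odbc.ini like this:

[mysql-nas]
Driver   = MySQL ODBC 8.0 ANSI Driver
Server   = 192.168.1.22
Port     = 7092
Database = mysql

after which the R side reduces to con <- dbConnect(odbc::odbc(), dsn = "mysql-nas", uid = "root", pwd = "my-secret-pw").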
To do this, your OS must have the ODBC drivers installed for MySQL. I grabbed them from https://dev.mysql.com/downloads/connector/odbc/. Windows was "just this", though Linux will require other libraries installed (perhaps unixodbc among others), and macOS could be something else (sorry, I don't know off the top of my head).

library(odbc)
odbc::odbcListDrivers()
#                             name  attribute value
### ...truncated...
# 46    MySQL ODBC 8.0 ANSI Driver UsageCount     1
# 47 MySQL ODBC 8.0 Unicode Driver UsageCount     1
con <- dbConnect(odbc::odbc(), driver = "MySQL ODBC 8.0 ANSI Driver",
                 user = "root", password = "my-secret-pw",
                 server = serverip, port = 7092, database = "mysql")   # dbname --> database
dbGetQuery(con, "select * from mt limit 2")
#       row_names mpg cyl disp  hp drat    wt  qsec vs am gear carb
# 1     Mazda RX4  21   6  160 110  3.9 2.620 16.46  0  1    4    4
# 2 Mazda RX4 Wag  21   6  160 110  3.9 2.875 17.02  0  1    4    4
dbDisconnect(con)
Note that the odbc connection changes from dbname= to database=. (For perspective, many DBMS connections use different labels for other fields, like uid=/user= and pwd=/password=, so … consider yourself lucky you just have one different field 🙂)

4. pool and odbc
Nothing cosmic here: if you need ODBC and are deploying, then this is for you. If you need/prefer ODBC and would like auto-reconnects, this is also good.
pcon <- dbPool(odbc::odbc(), driver = "MySQL ODBC 8.0 ANSI Driver",
               user = "root", password = "my-secret-pw",
               server = serverip, port = 7092, database = "mysql")   # dbname --> database
dbGetQuery(pcon, "select * from mt limit 2")
#       row_names mpg cyl disp  hp drat    wt  qsec vs am gear carb
# 1     Mazda RX4  21   6  160 110  3.9 2.620 16.46  0  1    4    4
# 2 Mazda RX4 Wag  21   6  160 110  3.9 2.875 17.02  0  1    4    4
poolClose(pcon)
PS: the RMySQL docs say that it can benefit from ~/.my.cnf (MySQL configuration) files. I haven't tested this.
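If you try it, I'd expect the file to follow the standard MySQL option-file layout, roughly like this (untested, and the group name may need to match what RMySQL looks for):

[client]
host     = 192.168.1.22
port     = 7092
user     = root
password = my-secret-pw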
- 44 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in Mysql.
Finally, I figured it out.
UPDATE customer c
JOIN address a
  ON c.Raw_Address LIKE CONCAT('%', a.city, '%')
SET c.city = a.City,
    c.state = a.State,
    c.Country = a.country,
    c.`Zip code` = a.`Zip code`
- 45 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in Mysql.
No, you can’t do that.
Mysqldump basically does this:
for each table { SELECT * FROM $table WHERE $where; }
It reads tables one at a time and does no joins, so there is no way the WHERE clause you pass in the arguments can reference more than one table. It also means the WHERE clause you give must be applicable to every table being dumped.
I was able to trick it using a subquery, but this is limited to dumping one table at a time.
I created tables:
create table test.parent (id int auto_increment primary key);
create table test.child (parent_id int, foreign key (parent_id) references test.parent(id));
Put rows into parent and just one row into child:
INSERT INTO parent (id) VALUES (1),(2),(3);
INSERT INTO child (parent_id) VALUES (1);
Now try to dump parent:

mysqldump test parent
...
INSERT INTO `parent` VALUES (1),(2),(3);
That works to dump all rows from one table.
We can't do a join, but can we use something in the WHERE clause to make it check the child table, so we only get rows that match a row from child?

mysqldump test parent --where="exists(select * from child where parent_id=parent.id)"
...
mysqldump: Couldn't execute 'SELECT /*!40001 SQL_NO_CACHE */ * FROM `parent` WHERE exists(select * from child where parent_id=parent.id)': Table 'child' was not locked with LOCK TABLES (1100)
Whoops? Can we make it not require locks?
mysqldump --help
...
  -l, --lock-tables   Lock all tables for read.
                      (Defaults to on; use --skip-lock-tables to disable.)
Yes!
mysqldump --skip-lock-tables test parent --where="exists(select * from child where parent_id=parent.id)"
...
INSERT INTO `parent` VALUES (1);
So now it dumps data from parent, but limits the rows to the one that has a matching row in child.
- 50 views
- 1 answers
- 0 votes
- Asked on July 16, 2020 in Python.
The attached JSON is not valid. I have corrected it and will assume the version below is what you intended:
a = [
    {
        "InvoiceNumberPrefix": "AB-",
        "InvoiceNumber": 1111,
        "OrderItemList": [
            {"CatalogID": 1, "ItemIndexID": 1, "ItemID": "444444"},
            {"CatalogID": 1, "ItemIndexID": 1, "ItemID": "666666"}
        ]
    }
]

print([j["ItemID"] for i in a for j in i["OrderItemList"]])
Output:
['444444', '666666']
- 40 views
- 2 answers
- 0 votes