13.2.1 CALL Syntax
CALL sp_name([parameter[,...]])
CALL sp_name[()]
The CALL statement invokes a stored procedure that was defined previously with CREATE
PROCEDURE.
Stored procedures that take no arguments can be invoked without parentheses. That is, CALL
p() and CALL p are equivalent.
CALL can pass back values to its caller using parameters that are declared
as OUT or INOUT parameters. When the procedure returns, a client program can also obtain the
number of rows affected for the final statement executed within the routine: At the SQL level, call
the ROW_COUNT() function; from the C API, call the mysql_affected_rows() function.
To get back a value from a procedure using an OUT or INOUT parameter, pass the parameter by
means of a user variable, and then check the value of the variable after the procedure returns. (If
you are calling the procedure from within another stored procedure or function, you can also pass a
routine parameter or local routine variable as an IN or INOUT parameter.) For an INOUT parameter,
initialize its value before passing it to the procedure. The following procedure has an OUT parameter
that the procedure sets to the current server version, and an INOUT value that the procedure
increments by one from its current value:
CREATE PROCEDURE p (OUT ver_param VARCHAR(25), INOUT incr_param INT)
BEGIN
# Set value of OUT parameter
SELECT VERSION() INTO ver_param;
# Increment value of INOUT parameter
SET incr_param = incr_param + 1;
END;
Before calling the procedure, initialize the variable to be passed as the INOUT parameter. After
calling the procedure, the values of the two variables will have been set or modified:
mysql> SET @increment = 10;
mysql> CALL p(@version, @increment);
mysql> SELECT @version, @increment;
+------------------+------------+
| @version | @increment |
+------------------+------------+
| 5.7.20-debug-log | 11 |
+------------------+------------+
In prepared CALL statements used with PREPARE and EXECUTE, placeholders can be used
for IN, OUT, and INOUT parameters, as follows:
mysql> SET @increment = 10;
mysql> PREPARE s FROM 'CALL p(?, ?)';
mysql> EXECUTE s USING @version, @increment;
mysql> SELECT @version, @increment;
+------------------+------------+
| @version | @increment |
+------------------+------------+
| 5.7.20-debug-log | 11 |
+------------------+------------+
To write C programs that use the CALL SQL statement to execute stored procedures that produce
result sets, the CLIENT_MULTI_RESULTS flag must be enabled. This is because each CALL returns a
result to indicate the call status, in addition to any result sets that might be returned by statements
executed within the procedure. CLIENT_MULTI_RESULTS must also be enabled if CALL is used to
execute any stored procedure that contains prepared statements. It cannot be determined when
such a procedure is loaded whether those statements will produce result sets, so it is necessary to
assume that they will.
CLIENT_MULTI_RESULTS can be enabled when you call mysql_real_connect(), either explicitly
by passing the CLIENT_MULTI_RESULTS flag itself, or implicitly by
passing CLIENT_MULTI_STATEMENTS (which also
enables CLIENT_MULTI_RESULTS). CLIENT_MULTI_RESULTS is enabled by default.
To process the result of a CALL statement executed
using mysql_query() or mysql_real_query(), use a loop that calls mysql_next_result() to
determine whether there are more results. For an example, see Section 27.8.16, C API Multiple
Statement Execution Support.
C programs can use the prepared-statement interface to execute CALL statements and
access OUT and INOUT parameters. This is done by processing the result of a CALL statement using
a loop that calls mysql_stmt_next_result() to determine whether there are more results. For an
example, see Section 27.8.18, C API Prepared CALL Statement Support. Languages that provide
a MySQL interface can use prepared CALL statements to directly retrieve OUT and INOUT procedure
parameters.
Metadata changes to objects referred to by stored programs are detected and cause automatic
reparsing of the affected statements when the program is next executed. For more information,
see Section 8.10.4, Caching of Prepared Statements and Stored Programs.
Partitioned Tables
DELETE supports explicit partition selection using the PARTITION option, which takes a list of the
comma-separated names of one or more partitions or subpartitions (or both) from which to select
rows to be dropped. Partitions not included in the list are ignored. Given a partitioned table t with a
partition named p0, executing the statement DELETE FROM t PARTITION (p0) has the same
effect on the table as executing ALTER TABLE t TRUNCATE PARTITION (p0); in both cases, all
rows in partition p0 are dropped.
PARTITION can be used along with a WHERE condition, in which case the condition is tested only on
rows in the listed partitions. For example, DELETE FROM t PARTITION (p0) WHERE c <
5 deletes rows only from partition p0 for which the condition c < 5 is true; rows in any other
partitions are not checked and thus not affected by the DELETE.
The PARTITION option can also be used in multiple-table DELETE statements. You can use up to
one such option per table named in the FROM option.
For more information and examples, see Section 22.5, Partition Selection.
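As a sketch, assuming a hypothetical table t partitioned by RANGE on an id column (the table definition and names here are illustrative, not from this manual):

```sql
-- Illustrative partitioned table
CREATE TABLE t (
  id INT NOT NULL,
  c INT
)
PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (100),
  PARTITION p1 VALUES LESS THAN (200)
);

-- Empty partition p0 only
DELETE FROM t PARTITION (p0);

-- Delete only rows in p0 that also match the condition;
-- rows in p1 are never examined
DELETE FROM t PARTITION (p0) WHERE c < 5;
```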
Auto-Increment Columns
If you delete the row containing the maximum value for an AUTO_INCREMENT column, the value is
not reused for a MyISAM or InnoDB table. If you delete all rows in the table with DELETE
FROM tbl_name (without a WHERE clause) in autocommit mode, the sequence starts over for all
storage engines except InnoDB and MyISAM. There are some exceptions to this behavior
for InnoDB tables, as discussed in Section 14.8.1.5, AUTO_INCREMENT Handling in InnoDB.
For MyISAM tables, you can specify an AUTO_INCREMENT secondary column in a multiple-column
key. In this case, reuse of values deleted from the top of the sequence occurs even
for MyISAM tables. See Section 3.6.9, Using AUTO_INCREMENT.
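The multiple-column key case can be sketched along the lines of the example in Section 3.6.9 (table and column names follow that example):

```sql
CREATE TABLE animals (
  grp ENUM('fish','mammal','bird') NOT NULL,
  id MEDIUMINT NOT NULL AUTO_INCREMENT,
  name CHAR(30) NOT NULL,
  PRIMARY KEY (grp, id)
) ENGINE=MyISAM;

-- id is an AUTO_INCREMENT secondary column: numbering restarts
-- at 1 within each grp value, and a value deleted from the top
-- of a group's sequence is reused by the next insert.
INSERT INTO animals (grp, name) VALUES
  ('mammal','dog'), ('mammal','cat'), ('bird','penguin');
```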
Modifiers
The DELETE statement supports the following modifiers:
If you specify LOW_PRIORITY, the server delays execution of the DELETE until no other clients
are reading from the table. This affects only storage engines that use only table-level locking
(such as MyISAM, MEMORY, and MERGE).
For MyISAM tables, if you use the QUICK modifier, the storage engine does not merge index
leaves during delete, which may speed up some kinds of delete operations.
The IGNORE modifier causes MySQL to ignore errors during the process of deleting rows.
(Errors encountered during the parsing stage are processed in the usual manner.) Errors that
are ignored due to the use of IGNORE are returned as warnings. For more information,
see Comparison of the IGNORE Keyword and Strict SQL Mode.
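A short sketch of these modifiers in use (table names are illustrative):

```sql
-- Skip rows whose deletion raises an ignorable error,
-- turning those errors into warnings
DELETE IGNORE FROM child WHERE created < '2020-01-01';
SHOW WARNINGS;

-- MyISAM only: do not merge index leaves while deleting
DELETE QUICK FROM logs WHERE id < 1000;
```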
Order of Deletion
If the DELETE statement includes an ORDER BY clause, rows are deleted in the order specified by the
clause. This is useful primarily in conjunction with LIMIT. For example, the following statement finds
rows matching the WHERE clause, sorts them by timestamp_column, and deletes the first (oldest)
one:
DELETE FROM somelog WHERE user = 'jcole'
ORDER BY timestamp_column LIMIT 1;
ORDER BY also helps to delete rows in an order required to avoid referential integrity violations.
InnoDB Tables
If you are deleting many rows from a large table, you may exceed the lock table size for
an InnoDB table. To avoid this problem, or simply to minimize the time that the table remains locked,
the following strategy (which does not use DELETE at all) might be helpful:
1. Select the rows not to be deleted into an empty table that has the same structure as the original
table:
INSERT INTO t_copy SELECT * FROM t WHERE ... ;
2. Use RENAME TABLE to atomically move the original table out of the way and rename the copy to
the original name:
RENAME TABLE t TO t_old, t_copy TO t;
3. Drop the original table:
DROP TABLE t_old;
DELETE QUICK can lead to wasted index space when deleted values leave underfilled index
blocks. Here is an example of such a scenario:
1. Create a table that contains an indexed AUTO_INCREMENT column.
2. Insert many rows into the table.
3. Delete a block of rows at the low end of the column range using DELETE QUICK.
In this scenario, the index blocks associated with the deleted index values become underfilled but
are not merged with other index blocks due to the use of QUICK. They remain underfilled when new
inserts occur, because new rows do not have index values in the deleted range. Furthermore, they
remain underfilled even if you later use DELETE without QUICK, unless some of the deleted index
values happen to lie in index blocks within or adjacent to the underfilled blocks. To reclaim unused
index space under these circumstances, use OPTIMIZE TABLE.
If you are going to delete many rows from a table, it might be faster to use DELETE QUICK followed
by OPTIMIZE TABLE. This rebuilds the index rather than performing many index block merge
operations.
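For example (table and column names are illustrative):

```sql
-- Bulk-delete without merging index leaves, then rebuild
-- the table and its indexes in a single pass
DELETE QUICK FROM hits WHERE ts < '2020-01-01';
OPTIMIZE TABLE hits;
```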
Multi-Table Deletes
You can specify multiple tables in a DELETE statement to delete rows from one or more tables
depending on the condition in the WHERE clause. You cannot use ORDER BY or LIMIT in a multiple-
table DELETE. The table_references clause lists the tables involved in the join, as described
in Section 13.2.9.2, JOIN Syntax.
For the first multiple-table syntax, only matching rows from the tables listed before the FROM clause
are deleted. For the second multiple-table syntax, only matching rows from the tables listed in
the FROM clause (before the USING clause) are deleted. The effect is that you can delete rows from
many tables at the same time and have additional tables that are used only for searching:
DELETE t1, t2 FROM t1 INNER JOIN t2 INNER JOIN t3
WHERE t1.id=t2.id AND t2.id=t3.id;
Or:
DELETE FROM t1, t2 USING t1 INNER JOIN t2 INNER JOIN t3
WHERE t1.id=t2.id AND t2.id=t3.id;
User Comments
Posted by Chris Rywalt on January 29, 2004
I spent an hour or so working out how to delete rows matching a specific SELECT statement which was mildly
complex:
(Basically, I had accidentally created two usernames for each ID, the extra username ending in 2. But there
were some valid usernames ending in 2 which I didn't want to delete.)
I tried several different approaches to crafting a delete statement to get rid of these, all to no avail. I tried
DELETE...WHERE IN...SELECT and DELETE...WHERE...= ANY...SELECT, WHERE EXISTS, and several
other variations, all of which looked like they should work according to the manual, but none of which did.
Finally -- hence this comment, so you don't have to jump through my hoops -- my DBA wife and I put together
this solution:
Maybe this isn't the best way to do this, but it worked for me. Hope it helps someone else.
I had a many-to-many relational table that joined users and events. Some users might save the same event
more than once...so I wanted to know a way to delete duplicate entries. The table has a primary key "ueventID"
(auto-increment) and two foreign keys "userID" and "eventID". In order to delete duplicate entries, I found that
this solution worked quite well for me.
DELETE t1 FROM tbl_name t1, tbl_name t2 WHERE t1.userID=t2.userID AND t1.eventID=t2.eventID AND
t1.ueventID < t2.ueventID
This will delete all but the very last entry of the duplicates. If there are any better ways to do this, feel free to let
me know. I'll try to remember to check back later.
Honestly, though, while I wanted to know how to do this...officially, I just check to see if it's a duplicate entry
BEFORE I insert it so that I don't have to hassle with this :-P
2) Use ALTER IGNORE TABLE and add an index for the duplicate column(s). Given this table (without primary
key):
+---+
| a |
+---+
| 1 |
| 1 |
| 2 |
| 2 |
| 3 |
+---+
Do this:
While it is documented in these pages, it takes a bit of hunting to confirm this incompatible change from v3.23
to v4.1:
If you delete all rows from a table with DELETE FROM tablename, then add some new rows with INSERT
INTO tablename, an AUTO_INCREMENT field would start again from 1 using MySQL v3.23.
However, with MyISAM tables with MySQL v4.1, the auto increment counter isn't reset back to 1 - even if you
do OPTIMIZE tablename. You have to do TRUNCATE tablename to delete all rows in order to reset the auto
increment counter.
This can cause problems because your auto increment counter gets higher and higher each time you do a
DELETE all/INSERT new data cycle.
It's probably worth mentioning that DELETE FROM doesn't use the same isolation level in a transaction as
SELECT. Even if you set the isolation level to REPEATABLE READ, it doesn't change the DELETE behaviour,
which works as READ COMMITTED. (This affects the InnoDB engine in MySQL 4 and MySQL 5.)
Here is an example:
| User A User B
|
| SET AUTOCOMMIT=0; SET AUTOCOMMIT=0;
|
| SELECT * FROM t;
| empty set
| INSERT INTO t VALUES (1, 2);
|
| SELECT * FROM t;
| empty set
| COMMIT;
|
| SELECT * FROM t;
| empty set
|
| SELECT * FROM t;
| ---------------------
| | 1 | 2 |
| ---------------------
| 1 row in set
|
| DELETE FROM t;
| Query OK,
| 1 row affected
|// ^ it deletes rows from
|// outside its transaction
|
| COMMIT;
|
| SELECT * FROM t;
| empty set
Keywords: ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails...
I think this is a good practice to do when you're designing a database that has lots of foreign keys. If you have
tables with the ON DELETE CASCADE option that are linked by another field to other tables, the delete cascade
can fail (because MySQL cannot cascade the delete in the same order you created the tables) with "ERROR 1452
(23000)". A solution for this case is to declare an ON DELETE SET NULL clause on the other foreign keys. An
example:
|-------------------------------------------------------
| mysql> CREATE TABLE a(
| -> id INT AUTO_INCREMENT, user VARCHAR(20), PRIMARY KEY(id)) ENGINE=InnoDB;
| Query OK, 0 rows affected (0.08 sec)
|
| mysql> CREATE TABLE b(
| -> id INT AUTO_INCREMENT, id_a INT, name VARCHAR(20), PRIMARY KEY(id),
| -> FOREIGN KEY(id_a) REFERENCES a(id) ON DELETE CASCADE) ENGINE=InnoDB;
| Query OK, 0 rows affected (0.08 sec)
|
| mysql> CREATE TABLE c(
| -> id INT AUTO_INCREMENT, id_a INT, id_b INT, lastName VARCHAR(20), PRIMARY KEY(id),
| -> FOREIGN KEY(id_a) REFERENCES a(id) ON DELETE CASCADE,
| -> FOREIGN KEY(id_b) REFERENCES b(id)) ENGINE=InnoDB;
| Query OK, 0 rows affected (0.08 sec)
|
| mysql> INSERT INTO a(user) VALUES('zerocool');
| Query OK, 1 row affected (0.06 sec)
|
| mysql> INSERT INTO b(id_a,name) VALUES(1,'carl');
| Query OK, 1 row affected (0.06 sec)
|
| mysql> INSERT INTO c(id_a,id_b,lastName) VALUES(1,1,'anderson');
| Query OK, 1 row affected (0.06 sec)
|
| mysql> DELETE FROM a WHERE user='zerocool';
| ERROR 1451 (23000): Cannot delete or update a parent row:
| a foreign key constraint fails (`apk_zupca/c`, CONSTRAINT
| `c_ibfk_2` FOREIGN KEY (`id_b`) REFERENCES `b` (`id`))
At this point, the ON DELETE CASCADE is failing because the child table (b) has another FOREIGN KEY pointing
at it (c is linked with b, so the row in b can't be deleted). We created the tables in the correct order, but MySQL
is trying to delete rows in the order in which we created the tables, which is the wrong way around. A solution
could be ON DELETE SET NULL. We should add this clause during the creation of the table (or with ALTER, if the
table is already created):
Hope it's helpful
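A sketch of the suggested fix for the tables above; the constraint name c_ibfk_2 is taken from the error message, and MySQL requires dropping and re-adding a foreign key to change its referential action:

```sql
-- Replace c's foreign key on id_b so that deleting the row in b
-- sets c.id_b to NULL instead of blocking the cascade
ALTER TABLE c DROP FOREIGN KEY c_ibfk_2;
ALTER TABLE c ADD FOREIGN KEY (id_b) REFERENCES b(id) ON DELETE SET NULL;
```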
I found a fast way to delete a small subset of rows in a very big table (hundreds of thousands or millions):
You need the IDs to be deleted in a temporary table (which you might already have), and you delete only
those IDs:
The trick is that the INNER JOIN 'shrinks' the LargeTable down to the size of the TemporarySmallTable, and the
delete operates only on that smaller set, since USING references the joined table.
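The statement itself was omitted from the comment; it presumably resembled the following sketch (LargeTable, TemporarySmallTable, and the id column are taken from the comment's description):

```sql
-- Delete from LargeTable only the rows whose id appears
-- in TemporarySmallTable, letting the join do the filtering
DELETE FROM LargeTable
USING LargeTable
INNER JOIN TemporarySmallTable
  ON LargeTable.id = TemporarySmallTable.id;
```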
The error code was 1093 and the explanation was "You can't specify target table 'm' for update in FROM clause".
The problem was that the members table (alias m) was both the table I wanted to delete from and a table in the
inner statement. I found the solution with a temporary table.
To delete all values in a table, including the auto-increment values, use the following example:
mysql> TRUNCATE tablename;
by
Deepu Surendran VS
OCS Technopark
+----------------+---------------------------+
| project_number | description |
+----------------+---------------------------+
| 06/XC/083 | Membrane Bioreactors |
| 06/XC/083 | Membrane bioreactors |
+----------------+---------------------------+
Instead, to delete just a single entry and leave the other, use the LIMIT clause:
DELETE FROM tbl_projects WHERE project_number = '06/XC/083' AND description = 'Membrane Bioreactors'
LIMIT 1;
"The IGNORE keyword causes MySQL to ignore all errors during the process of deleting rows. (Errors
encountered during the parsing stage are processed in the usual manner.) Errors that are ignored due to the
use of IGNORE are returned as warnings."
That's not true for ERROR 1451: on a foreign key constraint, DELETE IGNORE blocks (in 5.0).
13.2.3 DO Syntax
DO expr [, expr] ...
DO executes the expressions but does not return any results. In most respects, DO is shorthand
for SELECT expr, ..., but has the advantage that it is slightly faster when you do not care about
the result.
DO is useful primarily with functions that have side effects, such as RELEASE_LOCK().
Example: This SELECT statement pauses, but also produces a result set:
mysql> SELECT SLEEP(5);
+----------+
| SLEEP(5) |
+----------+
| 0 |
+----------+
1 row in set (5.02 sec)
DO, on the other hand, pauses without producing a result set:
mysql> DO SLEEP(5);
Query OK, 0 rows affected (4.99 sec)
This could be useful, for example, in a stored function or trigger, where statements that produce result
sets are prohibited.
DO only executes expressions. It cannot be used in all cases where SELECT can be used. For
example, DO id FROM t1 is invalid because it references a table.
13.2.4 HANDLER Syntax
The handler interface does not have to provide a consistent look of the data (for
example, dirty reads are permitted), so the storage engine can use optimizations
that SELECT does not normally permit.
HANDLER makes it easier to port applications that use a low-level ISAM-like interface to MySQL.
(See Section 14.20, InnoDB memcached Plugin for an alternative way to adapt applications
that use the key-value store paradigm.)
HANDLER enables you to traverse a database in a manner that is difficult (or even impossible) to
accomplish with SELECT. The HANDLER interface is a more natural way to look at data when
working with applications that provide an interactive user interface to the database.
HANDLER is a somewhat low-level statement. For example, it does not provide consistency. That
is, HANDLER ... OPEN does not take a snapshot of the table, and does not lock the table. This
means that after a HANDLER ... OPEN statement is issued, table data can be modified (by the
current session or other sessions) and these modifications might be only partially visible to HANDLER
... NEXT or HANDLER ... PREV scans.
An open handler can be closed and marked for reopen, in which case the handler loses its position
in the table. This occurs when both of the following circumstances are true:
Any session executes FLUSH TABLES or DDL statements on the handler's table.
The session in which the handler is open executes non-HANDLER statements that use tables.
TRUNCATE TABLE for a table closes all handlers for the table that were opened with HANDLER OPEN.
If a table is flushed with FLUSH TABLES tbl_name WITH READ LOCK after it was opened with HANDLER,
the handler is implicitly flushed and loses its position.
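A minimal sketch of a handler scan (table name is illustrative):

```sql
HANDLER t OPEN;
HANDLER t READ `PRIMARY` FIRST;  -- position at the first row by primary key
HANDLER t READ `PRIMARY` NEXT;   -- step forward one row at a time
HANDLER t CLOSE;
```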
13.2.5 INSERT Syntax
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
    [INTO] tbl_name
    [PARTITION (partition_name [, partition_name] ...)]
    [(col_name [, col_name] ...)]
    SELECT ...
    [ON DUPLICATE KEY UPDATE assignment_list]

value:
    {expr | DEFAULT}

value_list:
    value [, value] ...

assignment:
    col_name = value

assignment_list:
    assignment [, assignment] ...
INSERT inserts new rows into an existing table. The INSERT ... VALUES and INSERT ...
SET forms of the statement insert rows based on explicitly specified values. The INSERT ...
SELECT form inserts rows selected from another table or tables. INSERT with an ON DUPLICATE
KEY UPDATE clause enables existing rows to be updated if a row to be inserted would cause a
duplicate value in a UNIQUE index or PRIMARY KEY.
For additional information about INSERT ... SELECT and INSERT ... ON DUPLICATE KEY
UPDATE, see Section 13.2.5.1, INSERT ... SELECT Syntax, and Section 13.2.5.2, INSERT ... ON
DUPLICATE KEY UPDATE Syntax.
In MySQL 5.7, the DELAYED keyword is accepted but ignored by the server. For the reasons for this,
see Section 13.2.5.3, INSERT DELAYED Syntax.
Inserting into a table requires the INSERT privilege for the table. If the ON DUPLICATE KEY
UPDATE clause is used and a duplicate key causes an UPDATE to be performed instead, the
statement requires the UPDATE privilege for the columns to be updated. For columns that are read
but not modified, you need only the SELECT privilege (such as for a column referenced only on the
right hand side of a col_name=expr assignment in an ON DUPLICATE KEY UPDATE clause).
When inserting into a partitioned table, you can control which partitions and subpartitions accept new
rows. The PARTITION option takes a list of the comma-separated names of one or more partitions or
subpartitions (or both) of the table. If any of the rows to be inserted by a given INSERT statement do
not match one of the partitions listed, the INSERT statement fails with the error Found a row not
matching the given partition set. For more information and examples, see Section 22.5,
Partition Selection.
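As a sketch, assuming a table t partitioned by RANGE on id with a partition p0 covering id < 100 (names and boundaries are illustrative):

```sql
-- Accepted: id 10 belongs to p0
INSERT INTO t PARTITION (p0) (id, c) VALUES (10, 1);

-- Fails with "Found a row not matching the given partition set":
-- id 150 belongs to a partition other than p0
INSERT INTO t PARTITION (p0) (id, c) VALUES (150, 1);
```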
You can use REPLACE instead of INSERT to overwrite old rows. REPLACE is the counterpart
to INSERT IGNORE in the treatment of new rows that contain unique key values that duplicate old
rows: The new rows replace the old rows rather than being discarded. See Section 13.2.8,
REPLACE Syntax.
tbl_name is the table into which rows should be inserted. Specify the columns for which the
statement provides values as follows:

Provide a parenthesized list of comma-separated column names following the table name. In this
case, a value for each named column must be provided by the VALUES list or the SELECT statement.

If you do not specify a list of column names for INSERT ... VALUES or INSERT ... SELECT,
values for every column in the table must be provided by the VALUES list or the SELECT statement.
If you do not know the order of the columns in the table, use DESCRIBE tbl_name to find out.

A SET clause indicates columns explicitly by name, together with the value to assign each one.
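A sketch of the three forms, assuming a hypothetical table people(id INT, name VARCHAR(30)):

```sql
-- 1. Parenthesized column list
INSERT INTO people (id, name) VALUES (1, 'Ada');

-- 2. No column list: one value per column, in table order
INSERT INTO people VALUES (2, 'Grace');

-- 3. SET clause naming each column explicitly
INSERT INTO people SET id = 3, name = 'Edsger';
```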
Column values can be given in several ways:
If INSERT inserts a row into a table that has an AUTO_INCREMENT column, you can find the value
used for that column by using the LAST_INSERT_ID() SQL function or the mysql_insert_id() C
API function.
Note
These two functions do not always behave identically. The behavior of INSERT statements with
respect to AUTO_INCREMENT columns is discussed further in Section 12.14, Information Functions,
and Section 27.8.7.38, mysql_insert_id().
The INSERT statement supports the following modifiers:
If you use the LOW_PRIORITY modifier, execution of the INSERT is delayed until no other clients
are reading from the table. This includes other clients that began reading while existing clients
are reading, and while the INSERT LOW_PRIORITY statement is waiting. It is possible, therefore,
for a client that issues an INSERT LOW_PRIORITY statement to wait for a very long time.

LOW_PRIORITY affects only storage engines that use only table-level locking (such as MyISAM,
MEMORY, and MERGE).

Note
LOW_PRIORITY should normally not be used with MyISAM tables because doing so disables
concurrent inserts. See Section 8.11.3, Concurrent Inserts.

If you specify HIGH_PRIORITY, it overrides the effect of the --low-priority-updates option if
the server was started with that option. It also causes concurrent inserts not to be used.
See Section 8.11.3, Concurrent Inserts.

HIGH_PRIORITY affects only storage engines that use only table-level locking (such as MyISAM,
MEMORY, and MERGE).

If you use the IGNORE modifier, errors that occur while executing the INSERT statement are
ignored. For example, without IGNORE, a row that duplicates an existing UNIQUE index
or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted.
With IGNORE, the row is discarded and no error occurs. Ignored errors generate warnings instead.

IGNORE has a similar effect on inserts into partitioned tables where no partition matching a
given value is found. Without IGNORE, such INSERT statements are aborted with an error.
When INSERT IGNORE is used, the insert operation fails silently for rows containing the
unmatched value, but inserts rows that are matched. For an example, see Section 22.2.2,
LIST Partitioning.

Data conversions that would trigger errors abort the statement if IGNORE is not specified.
With IGNORE, invalid values are adjusted to the closest values and inserted; warnings are
produced but the statement does not abort. You can determine with the mysql_info() C API
function how many rows were actually inserted into the table.

For more information, see Comparison of the IGNORE Keyword and Strict SQL Mode.

If you specify ON DUPLICATE KEY UPDATE, and a row is inserted that would cause a duplicate
value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row occurs. The affected-rows
value per row is 1 if the row is inserted as a new row, 2 if an existing row is updated, and
0 if an existing row is set to its current values. If you specify the CLIENT_FOUND_ROWS flag
to the mysql_real_connect() C API function when connecting to mysqld, the affected-rows value
is 1 (not 0) if an existing row is set to its current values. See Section 13.2.5.2,
INSERT ... ON DUPLICATE KEY UPDATE Syntax.

INSERT DELAYED was deprecated in MySQL 5.6, and is scheduled for eventual removal. In
MySQL 5.7, the DELAYED modifier is accepted but ignored. Use INSERT (without DELAYED)
instead. See Section 13.2.5.3, INSERT DELAYED Syntax.
An INSERT statement affecting a partitioned table using a storage engine such as MyISAM that
employs table-level locks locks only those partitions into which rows are actually inserted. (For
storage engines such as InnoDB that employ row-level locking, no locking of partitions takes place.)
For more information, see Section 22.6.4, Partitioning and Locking.
User Comments
Here's an example:
insert into Citylist (cityname) VALUES ('St. John\'s')
Table logs:
id: INT(11) auto_increment primary key
site_id: INT(11)
time: DATE
hits: INT(11)
Then:
CREATE UNIQUE INDEX comp ON logs (`site_id`, `time`);
Excellent feature, and it is much faster and briefer than first issuing a select, then issuing either an update or an
insert depending on the result of the select. You also get rid of the probably necessary table lock during this
action.
For example, if you insert 5 rows with this syntax, and 3 of them were inserted while 2 were updated, the return
value would be 7:
((3 inserts * 1) + (2 updates * 2)) = 7.
The return value may at first appear worrisome, as only 5 rows in the table were actually modified, but actually
provides more information, because you can determine the quantities of each query type performed from the
return value.
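With the logs table and the unique index comp defined above, such an upsert might look like this (the values are illustrative):

```sql
-- One statement: insert a new counter row, or bump hits when
-- the (site_id, time) pair already exists
INSERT INTO logs (site_id, time, hits)
VALUES (1, '2024-05-01', 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;
```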
The following code creates a new table called "fusion" with the fields partition en, classe,
segment, F tot, F loc and indice specif.
After this code, you can append the following code as many times as desired, which merges the
tables into the new "fusion" table:
http://dev.mysql.com/doc/mysql/en/Counting_rows.html
If you know another way to insert several files with almost the same data (cat dog turtle + cat dog parrot =
cat dog turtle parrot) while avoiding repetition, please share it.
If you do this then SELECT LAST_INSERT_ID() will return either the inserted id or the updated id.
CREATE TABLE t1 (
a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
b VARCHAR(10)) TYPE=InnoDB;
CREATE TABLE t2 (
a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
b INT NOT NULL,
FOREIGN KEY (b) REFERENCES t1 (a),
c VARCHAR(15)) TYPE=InnoDB;
We can INSERT rows into t2 that populate the foreign key column based on a SELECT statement on t1:
Then we get:
mysql> SELECT * FROM t2;
+---+---+-------------+
| a | b | c |
+---+---+-------------+
| 1 | 2 | shoulder |
| 2 | 2 | old block |
| 3 | 3 | toilet |
| 4 | 3 | long,silver |
| 5 | 3 | li'l |
+---+---+-------------+
5 rows in set (0.00 sec)
This is especially useful if you don't want to specify the ids for your rows (because they may differ from
database to database, due to their being based on AUTO_INCREMENTs), but you want to refer to the values
of other tables.
I haven't tested this to determine the version of MySQL this was introduced in, or whether the tables must
be InnoDB, but it works on my boxes (MySQL 4.1.12).
2) If you don't have auto-incrementation in the two tables (tableB for example):
ALTER TABLE tableB ADD (id INT AUTO_INCREMENT NOT NULL, PRIMARY KEY(id));
This looks quite simple but it took me several hours to understand that there's no need for a special statement
to handle such cases.
Regards!
hth,
Lokar
Cheers, al.
<?php
$myi = new mysqli("localhost", "user", "pass", "dbname");
$myi->query( <<<SQL_CREATE
create temporary table test_warnings
(
`id_` int(11) NOT NULL,
`num_` int(11) default NULL,
PRIMARY KEY (`id_`)
);
SQL_CREATE
);
$sth = $myi->prepare("insert ignore into test_warnings (id_, num_) values (?,?)");
$id = 9;
$num = 1;
for( $i=0; $i<2; $i++ )
{
$sth->bind_param( "ii", $id, $num );
$sth->execute();
$r = $myi->affected_rows;
print "r==$r\n<br>";
$sth->reset();
}
$sth->close();
?>
This record will not be inserted as the username is already in the database other fields can be used.
Regards,
Elliot
http://www.sioure.com
CREATE TABLE t1 (
a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
b INT NOT NULL,
c VARCHAR(15)) ENGINE=ndbcluster;
CREATE TABLE t2 (
a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
b INT NOT NULL,
c VARCHAR(15)) ENGINE=ndbcluster;
int b = 0;
while (b < 1000) {
  for (int x = 0; x < 10; x++) {
    INSERT INTO t2(b,c) VALUES ($b, 'Node 4');
    b++;
    sleep(1);
  }
  INSERT INTO t1(b,c) SELECT b, c FROM t2;
  DELETE FROM t2;
}
This will result in holes that are backfilled in t1. After a run, this would be the first 100 rows of
SELECT * FROM t1 ORDER BY a;
---------------------------------
| a |b |c |
---------------------------------
|0 |0 |Node 3 |
|1 |1 |Node 3 |
|2 |2 |Node 3 |
|3 |3 |Node 3 |
|4 |4 |Node 3 |
|5 |5 |Node 3 |
|6 |6 |Node 3 |
|7 |7 |Node 3 |
|8 |8 |Node 3 |
|9 |9 |Node 3 |
|10 |0 |Node 4 |
|11 |1 |Node 4 |
| 12 |2 |Node 4 |
|13 |3 |Node 4 |
|14 |4 |Node 4 |
|15 |5 |Node 4 |
|16 |6 |Node 4 |
|17 |7 |Node 4 |
|18 |8 |Node 4 |
|19 |9 |Node 4 |
|20 |10 |Node 4 |
|21 |11 |Node 4 |
|22 |12 |Node 4 |
|23 |13 |Node 4 |
|24 |14 |Node 4 |
|25 |15 |Node 4 |
|26 |16 |Node 4 |
|27 |17 |Node 4 |
|28 |18 |Node 4 |
|29 |19 |Node 4 |
|30 |20 |Node 4 |
|31 |21 |Node 4 |
|32 |22 |Node 4 |
|33 |23 |Node 4 |
|34 |24 |Node 4 |
|35 |25 |Node 4 |
|36 |26 |Node 4 |
|37 |27 |Node 4 |
|38 |28 |Node 4 |
|39 |29 |Node 4 |
|40 |30 |Node 4 |
|41 |31 |Node 4 |
|42 |10 |Node 3 |
|43 |11 |Node 3 |
|44 |12 |Node 3 |
|45 |13 |Node 3 |
|46 |14 |Node 3 |
|47 |15 |Node 3 |
|48 |16 |Node 3 |
|49 |17 |Node 3 |
|50 |18 |Node 3 |
|51 |19 |Node 3 |
|52 |20 |Node 3 |
|53 |21 |Node 3 |
|54 |22 |Node 3 |
|55 |23 |Node 3 |
|56 |24 |Node 3 |
|57 |25 |Node 3 |
|58 |26 |Node 3 |
|59 |27 |Node 3 |
|60 |28 |Node 3 |
|61 |29 |Node 3 |
|62 |30 |Node 3 |
|63 |31 |Node 3 |
|64 |32 |Node 3 |
|65 |33 |Node 3 |
|66 |34 |Node 3 |
|67 |35 |Node 3 |
|68 |36 |Node 3 |
|69 |37 |Node 3 |
|70 |38 |Node 3 |
|71 |39 |Node 3 |
|72 |32 |Node 4 |
|73 |33 |Node 4 |
|74 |34 |Node 4 |
|75 |35 |Node 4 |
|76 |36 |Node 4 |
|77 |37 |Node 4 |
|78 |38 |Node 4 |
|79 |39 |Node 4 |
|80 |40 |Node 4 |
|81 |41 |Node 4 |
|82 |42 |Node 4 |
|83 |43 |Node 4 |
|84 |44 |Node 4 |
|85 |45 |Node 4 |
|86 |46 |Node 4 |
|87 |47 |Node 4 |
|88 |48 |Node 4 |
|89 |49 |Node 4 |
|90 |50 |Node 4 |
|91 |51 |Node 4 |
|92 |52 |Node 4 |
|93 |53 |Node 4 |
|94 |54 |Node 4 |
|95 |55 |Node 4 |
|96 |56 |Node 4 |
|97 |57 |Node 4 |
|98 |58 |Node 4 |
|99 |59 |Node 4 |
A query for the maximum value of a will return a value such as 2008 as the highest in-use a value, even
though the table would have only 2000 actual rows. This has serious implications for using a as a
high-water mark: because node 4 backfilled t1 (node 3 jumped from inserting a=9 to a=42 above, and from
a=71 to a=104), the HWM will miss node 4's values. This is a direct result of behavior modified for
bug #31956:
Changed ndb_autoincrement_prefetch_sz to specify the prefetch between statements, changed the default
to 1 (with an internal prefetch of at least 32 inside a statement), and added handling of updates of
primary/unique keys with AUTO_INCREMENT.
Because an INSERT ... SELECT does not know how many rows will be returned, 32 values are allocated and
continue to be used until exhausted, regardless of whether 10 rows at a time are moved, or 1 (if x had
only been allowed to grow to 1, for example, a=1 would have had 'Node 4' while the second 'Node 3' row
would have been a=33). Therefore, it is NOT recommended to use INSERT ... SELECT statements with Cluster
databases if the auto-incrementing ID is meant to imply an absolute order on the timing of insertion into
a table. To get that effect, the developer must read each row out of t2 and insert it into t1 individually.
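The interleaving above can be reproduced with a toy model of per-node prefetch. This is only a sketch of the allocation pattern, not NDB internals; the block size of 32 follows the bug #31956 note, and the exact block boundaries in a real cluster will differ:

```python
# Toy model of per-node auto-increment prefetch (NOT NDB internals):
# each SQL node grabs a block of 32 values for a multi-row statement
# and keeps drawing from that block until it is exhausted.
PREFETCH = 32

class Node:
    def __init__(self, counter):
        self.counter = counter      # shared "next block start", a 1-item list
        self.block = iter(())       # currently held block (starts empty)

    def next_id(self):
        nxt = next(self.block, None)
        if nxt is None:             # block exhausted: fetch a fresh one
            start = self.counter[0]
            self.counter[0] += PREFETCH
            self.block = iter(range(start, start + PREFETCH))
            nxt = next(self.block)
        return nxt

counter = [0]
node3, node4 = Node(counter), Node(counter)

ids = []
# node 3 inserts 10 rows, then node 4 inserts 42 rows, then node 3 resumes:
ids += [("Node 3", node3.next_id()) for _ in range(10)]
ids += [("Node 4", node4.next_id()) for _ in range(42)]
ids += [("Node 3", node3.next_id()) for _ in range(30)]

# node 3 drains the rest of its first block (10..31) AFTER node 4 has
# already used 32..73, so the ids are not monotonic in insertion order:
print(ids[9], ids[10], ids[52])  # ('Node 3', 9) ('Node 4', 32) ('Node 3', 10)
```

The key observation is that `ids[52]`, inserted last, carries a smaller id than `ids[10]`, inserted earlier, which is exactly why the id cannot serve as a high-water mark.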
If one column of the unique key is NULL, no duplicate error is raised, and duplicate entries can be
inserted. For example, suppose you have a unique key on (`id`, `second`), but `second` is NULL on insert:
you can then end up with two (1, NULL) entries in the table, despite the unique key on (`id`, `second`).
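SQLite happens to treat NULLs in a UNIQUE index the same way MySQL does (each NULL is distinct), so the effect is easy to demonstrate with Python's built-in sqlite3 module; table and column names here are illustrative:

```python
import sqlite3

# NULLs are considered distinct in a UNIQUE index, so two (1, NULL)
# rows do not raise a duplicate-key error.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, second INTEGER, UNIQUE (id, second))")
conn.execute("INSERT INTO t VALUES (1, NULL)")
conn.execute("INSERT INTO t VALUES (1, NULL)")   # accepted, no error raised
count = conn.execute(
    "SELECT COUNT(*) FROM t WHERE id = 1 AND second IS NULL").fetchone()[0]
print(count)  # 2
```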
"For multiple-row INSERT statements or INSERT INTO ... SELECT statements, the column is set to the implicit
default value for the column data type. This is 0 for numeric types, the empty string ('') for string types, and the
zero value for date and time types."
also appears to apply to a single row "replace into" query, which can be very confusing to debug when it
appears to not obey the table constraints and just turns nulls/missing columns into empty strings. This can
particularly be a problem if you have a unique constraint on one of those columns.
http://dev.mysql.com/doc/refman/5.1/en/string-literals.html
In the non-LOCAL case, these rules mean that a file named as ./myfile.txt is read from the
server's data directory, whereas the file named as myfile.txt is read from the database directory
of the default database. For example, if db1 is the default database, the following LOAD
DATA statement reads the file data.txt from the database directory for db1, even though the
statement explicitly loads the file into a table in the db2 database:
LOAD DATA INFILE 'data.txt' INTO
TABLE db2.my_table;
Non-LOCAL load operations read text files located on the server. For security reasons, such
operations require that you have the FILE privilege. See Section 6.2.1, Privileges Provided by
MySQL. Also, non-LOCAL load operations are subject to the secure_file_priv system variable
setting. If the variable value is a nonempty directory name, the file to be loaded must be located in
that directory. If the variable value is empty (which is insecure), the file need only be readable by the
server.
Using LOCAL is a bit slower than letting the server access the files directly, because the contents of
the file must be sent over the connection by the client to the server. On the other hand, you do not
need the FILE privilege to load local files.
LOCAL also affects error handling:

With LOAD DATA INFILE, data-interpretation and duplicate-key errors terminate the operation.

With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key errors become warnings and the
operation continues, because the server has no way to stop transmission of the file in the middle of
the operation. For duplicate-key errors, this is the same as if IGNORE is specified. IGNORE is
explained further later in this section.
The REPLACE and IGNORE keywords control handling of input rows that duplicate existing rows on
unique key values:

If you specify REPLACE, input rows replace existing rows; that is, rows that have the same value for
a primary key or unique index as an existing row. See Section 13.2.8, REPLACE Syntax.

If you specify IGNORE, rows that duplicate an existing row on a unique key value are discarded. For
more information, see Comparison of the IGNORE Keyword and Strict SQL Mode.

If you specify neither option, the behavior depends on whether the LOCAL keyword is specified.
Without LOCAL, an error occurs when a duplicate key value is found, and the rest of the text file is
ignored. With LOCAL, the default behavior is the same as if IGNORE is specified; this is because the
server has no way to stop transmission of the file in the middle of the operation.
To ignore foreign key constraints during the load operation, issue a SET foreign_key_checks =
0 statement before executing LOAD DATA.
If you use LOAD DATA INFILE on an empty MyISAM table, all nonunique indexes are created in a
separate batch (as for REPAIR TABLE). Normally, this makes LOAD DATA INFILE much faster when
you have many indexes. In some extreme cases, you can create the indexes even faster by turning
them off with ALTER TABLE ... DISABLE KEYS before loading the file into the table and
using ALTER TABLE ... ENABLE KEYS to re-create the indexes after loading the file.
See Section 8.2.4.1, Optimizing INSERT Statements.
For both the LOAD DATA INFILE and SELECT ... INTO OUTFILE statements, the syntax of
the FIELDS and LINES clauses is the same. Both clauses are optional, but FIELDS must
precede LINES if both are specified.
If you specify a FIELDS clause, each of its subclauses (TERMINATED BY, [OPTIONALLY] ENCLOSED
BY, and ESCAPED BY) is also optional, except that you must specify at least one of them.
If you specify no FIELDS or LINES clause, the defaults are the same as if you had written this:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY ''
(Backslash is the MySQL escape character within strings in SQL statements, so to specify a literal
backslash, you must specify two backslashes for the value to be interpreted as a single backslash.
The escape sequences '\t' and '\n' specify tab and newline characters, respectively.)
In other words, the defaults cause LOAD DATA INFILE to act as follows when reading input:
Look for line boundaries at newlines.
Note
If you have generated the text file on a Windows system, you might have to use LINES TERMINATED
BY '\r\n' to read the file properly, because Windows programs typically use two characters as a
line terminator. Some programs, such as WordPad, might use \r as a line terminator when writing
files. To read such files, use LINES TERMINATED BY '\r'.
If all the lines you want to read in have a common prefix that you want to ignore, you can use LINES
STARTING BY 'prefix_string' to skip over the prefix, and anything before it. If a line does not
include the prefix, the entire line is skipped. Suppose that you issue the following statement:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
  FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
If the data file looks like this:
xxx"abc",1
something xxx"def",2
"ghi",3
The resulting rows will be ("abc",1) and ("def",2). The third row in the file is skipped because it
does not contain the prefix.
The IGNORE number LINES option can be used to ignore lines at the start of the file. For example,
you can use IGNORE 1 LINES to skip over an initial header line containing column names:
LOAD DATA INFILE '/tmp/test.txt'
INTO TABLE test IGNORE 1 LINES;
When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE to write data from
a database into a file and then read the file back into the database later, the field- and line-handling
options for both statements must match. Otherwise, LOAD DATA INFILE will not interpret the
contents of the file properly. Suppose that you use SELECT ... INTO OUTFILE to write a file with
fields delimited by commas:
SELECT * INTO OUTFILE 'data.txt'
FIELDS TERMINATED BY ','
FROM table2;
To read the comma-delimited file back in, the correct statement would be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
LOAD DATA INFILE can be used to read files obtained from external sources. For example, many
programs can export data in comma-separated values (CSV) format, such that lines have fields
separated by commas and enclosed within double quotation marks, with an initial line of column
names. If the lines in such a file are terminated by carriage return/newline pairs, the statement
shown here illustrates the field- and line-handling options you would use to load the file:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\r\n'
  IGNORE 1 LINES;
If the input values are not necessarily enclosed within quotation marks, use OPTIONALLY before
the ENCLOSED BY keywords.
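As a cross-check of the field- and line-handling rules above, Python's csv module applies the same conventions (comma delimiter, double-quote enclosure, CRLF line endings, and skipping a header line like IGNORE 1 LINES). The data string below is made up for illustration:

```python
import csv, io

# CSV text as exported with FIELDS TERMINATED BY ',' ENCLOSED BY '"'
# and LINES TERMINATED BY '\r\n', with an initial header line to skip:
data = 'id,name\r\n"1","alpha"\r\n"2","beta"\r\n'

reader = csv.reader(io.StringIO(data), delimiter=",", quotechar='"')
next(reader)                       # IGNORE 1 LINES: skip the header row
rows = [tuple(row) for row in reader]
print(rows)  # [('1', 'alpha'), ('2', 'beta')]
```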
Any of the field- or line-handling options can specify an empty string (''). If not empty, the FIELDS
[OPTIONALLY] ENCLOSED BY and FIELDS ESCAPED BY values must be a single character.
The FIELDS TERMINATED BY, LINES STARTING BY, and LINES TERMINATED BY values can be
more than one character. For example, to write lines that are terminated by carriage return/linefeed
pairs, or to read a file containing such lines, specify a LINES TERMINATED BY '\r\n' clause.
To read a file containing jokes that are separated by lines consisting of %%, you can do this:
CREATE TABLE jokes
  (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   joke TEXT NOT NULL);
LOAD DATA INFILE '/tmp/jokes.txt' INTO TABLE jokes
  FIELDS TERMINATED BY ''
  LINES TERMINATED BY '\n%%\n' (joke);
FIELDS [OPTIONALLY] ENCLOSED BY controls quoting of fields. For output (SELECT ... INTO
OUTFILE), if you omit the word OPTIONALLY, all fields are enclosed by the ENCLOSED BY character.
An example of such output (using a comma as the field delimiter) is shown here:
"1","a string","100.20"
"2","a string containing a ,
comma","102.20"
"3","a string containing a \"
quote","102.20"
"4","a string containing a \",
quote and comma","102.20"
If you specify OPTIONALLY, the ENCLOSED BY character is used only to enclose values from
columns that have a string data type (such as CHAR, BINARY, TEXT, or ENUM):
1,"a string",100.20
2,"a string containing a ,
comma",102.20
3,"a string containing a \"
quote",102.20
4,"a string containing a \",
quote and comma",102.20
Occurrences of the ENCLOSED BY character within a field value are escaped by prefixing them with
the ESCAPED BY character. Also, if you specify an empty ESCAPED BY value, it is possible to
inadvertently generate output that cannot be read properly by LOAD DATA INFILE. For example, the
preceding output just shown would appear as follows if the escape character is empty. Observe that
the second field in the fourth line contains a comma following the quote, which (erroneously)
appears to terminate the field:
1,"a string",100.20
2,"a string containing a ,
comma",102.20
3,"a string containing a "
quote",102.20
4,"a string containing a ", quote
and comma",102.20
For input, the ENCLOSED BY character, if present, is stripped from the ends of field values. (This is
true regardless of whether OPTIONALLY is specified; OPTIONALLY has no effect on input
interpretation.) Occurrences of the ENCLOSED BY character preceded by the ESCAPED BY character
are interpreted as part of the current field value.
If the field begins with the ENCLOSED BY character, instances of that character are recognized as
terminating a field value only if followed by the field or line TERMINATED BY sequence. To avoid
ambiguity, occurrences of the ENCLOSED BY character within a field value can be doubled and are
interpreted as a single instance of the character. For example, if ENCLOSED BY '"' is specified,
quotation marks are handled as shown here:

"The ""BIG"" boss"  -> The "BIG" boss
The "BIG" boss      -> The "BIG" boss
The ""BIG"" boss    -> The ""BIG"" boss
FIELDS ESCAPED BY controls how to read or write special characters:
For input, if the FIELDS ESCAPED BY character is not empty, occurrences of that character are
stripped and the following character is taken literally as part of a field value. The exceptions are
certain two-character sequences where the first character is the escape character. These sequences
are shown in the following table (using \ for the escape character). The rules for NULL handling are
described later in this section.

Escape Sequence   Character
\0                An ASCII NUL (X'00') character
\b                A backspace character
\n                A newline (linefeed) character
\r                A carriage return character
\t                A tab character
\Z                ASCII 26 (Control+Z)
\N                NULL

For more information about \-escape syntax, see Section 9.1.1, String Literals.

If the FIELDS ESCAPED BY character is empty, escape-sequence interpretation does not occur.
For output, if the FIELDS ESCAPED BY character is not empty, it is used to prefix the following
characters on output:

  The FIELDS ESCAPED BY character
  The FIELDS [OPTIONALLY] ENCLOSED BY character
  The first character of the FIELDS TERMINATED BY and LINES TERMINATED BY values
  ASCII 0 (what is actually written following the escape character is ASCII 0, not a zero-valued byte)

If the FIELDS ESCAPED BY character is empty, no characters are escaped and NULL is output as NULL,
not \N. It is probably not a good idea to specify an empty escape character, particularly if field
values in your data contain any of the characters in the list just given.
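The round-trip value of an escape character can be sketched with Python's csv module, which supports the same idea through its `escapechar` dialect option (the field values here are invented):

```python
import csv, io

# With an escape character, a delimiter inside a field survives a
# round trip even when no quoting is used.
buf = io.StringIO()
writer = csv.writer(buf, delimiter=",", quoting=csv.QUOTE_NONE, escapechar="\\")
writer.writerow(["a,b", "c"])      # written on disk as: a\,b,c

buf.seek(0)
row = next(csv.reader(buf, delimiter=",", quoting=csv.QUOTE_NONE, escapechar="\\"))
print(row)  # ['a,b', 'c'] -- the escaped comma is read back as data
```

With `escapechar` left unset and quoting disabled, the writer would have no way to emit the embedded comma unambiguously, mirroring the warning above about an empty ESCAPED BY value.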
In certain cases, field- and line-handling options interact:
The column list can contain either column names or user variables. With user variables,
the SET clause enables you to perform transformations on their values before assigning the result to
columns.
User variables in the SET clause can be used in several ways. The following example uses the first
input column directly for the value of t1.column1, and assigns the second input column to a user
variable that is subjected to a division operation before being used for the value of t1.column2:
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, @var1)
SET column2 = @var1/100;
The SET clause can be used to supply values not derived from the input file. The following statement
sets column3 to the current date and time:
LOAD DATA INFILE 'file.txt'
  INTO TABLE t1
  (column1, column2)
  SET column3 = CURRENT_TIMESTAMP;
You can also discard an input value by assigning it to a user variable and not assigning the variable
to a table column:
When processing an input line, LOAD DATA splits it into fields and uses the values according to the
column/variable list and the SET clause, if they are present. Then the resulting row is inserted into
the table. If there are BEFORE INSERT or AFTER INSERT triggers for the table, they are activated
before or after inserting the row, respectively.
If an input line has too many fields, the extra fields are ignored and the number of warnings is
incremented.
If an input line has too few fields, the table columns for which input fields are missing are set to their
default values. Default value assignment is described in Section 11.7, Data Type Default Values.
An empty field value is interpreted differently from a missing field:
In the following example, we convert the data in the file for date columns col3 and col4, in the
formats 'mm/dd/yyyy' and 'dd/mm/yyyy' respectively, into the MySQL standard YYYY-mm-dd. You can
convert into any format you want by wrapping str_to_date() in the date_format() function.
Example:
...
SET col2 = date_format(str_to_date(@col2, 'format'), 'your format')
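The same transformation can be checked outside MySQL with Python's datetime module, which uses the same %-style format directives as STR_TO_DATE()/DATE_FORMAT(); the sample dates are invented:

```python
from datetime import datetime

# Equivalent of DATE_FORMAT(STR_TO_DATE(@col, in_format), '%Y-%m-%d'):
def to_mysql_date(value, in_format):
    return datetime.strptime(value, in_format).strftime("%Y-%m-%d")

print(to_mysql_date("12/31/2019", "%m/%d/%Y"))  # 2019-12-31  (col3 format)
print(to_mysql_date("31/12/2019", "%d/%m/%Y"))  # 2019-12-31  (col4 format)
```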
'...column values are read and written using a field width wide enough to hold all values in the field.'
If you have a VARCHAR(20) column in a multi-byte character set (eg, UTF8), then the "field width wide enough
to hold all values" in this field, measured in bytes, will actually be somewhat greater than 20. The two
workarounds above worked because they both specified character sets which allocate one byte per character
(latin1 and binary).
Specifying the character set in the LOAD DATA INFILE statement does not seem to work around the problem -
that seems only to affect the incoming conversion from bytes to characters, it doesn't affect the number of
bytes read.
The Latin1/binary examples above worked because they weren't trying to load multi-byte characters, however
for someone who was trying to import multi-byte characters (or more specifically, to import character sets like
UTF8 that use variable-width encoding for the characters) it would not work. There doesn't appear to be an
easy workaround that I can see except to write an import utility in another programming language like Perl,
Java or C.
MySQL casts the value into the column type when it reads it from the file, *before* applying the transformations
described in the SET clause. If you instruct it to read it into a variable the value is handled as string from the
beginning.
Now, if you would want to load the males (row[0] == 1) only, create the following table.
The IGNORE prevents MySQL from aborting the import mid-file due to the missing partition.
Hope this is helpful to someone.
http://www.softwareprojects.com/resources/programming/t-how-to-use-mysql-fast-load-data-for-updates-
1753.html
Then do a SELECT INTO DUMPFILE to save the file somewhere on the server accessible to MySQL.
You will likely be unable to delete this file, so it is best to overwrite it with another DUMPFILE query, this time
giving it an empty string to erase its contents, and you can reuse it again later.
I used a tool called enca on my Linux box (apt-get install enca) to detect the file's encoding:
# enca import.csv
Universal transformation format 8 bits; UTF-8
I converted it to latin1 and then ran the following query to populate my table (encoded in
utf8_general_ci):
load data local infile '/home/aissam/Documents/data/import.csv' INTO TABLE cscripts FIELDS TERMINATED
BY ';';
good luck
In my particular case, I have a CSV of all US Zipcodes. A number of entries in the CSV duplicate other entries.
But, only one entry is marked as the "Preferred" entry, and that's the canonical entry that I want to have end up
in my table. Unfortunately, that entry could appear anywhere in the file - not necessarily first (so I could use
IGNORE) or last (so I could use REPLACE).
Since my table has a unique index on zip, the following allowed me to exclude all non-preferred rows from
being imported:
(Note that the above is a string in a Ruby script that is programmatically executed.)
Since the IGNORE keyword is specified, when zip is set to NULL the row is ignored *assuming* the DBMS is
configured to not auto-convert NULL to the empty string.
If instead the DBMS is configured to auto-convert NULL to the empty string, the single junk row must be
deleted after the above completes.
Don't forget apparmor if you are running Ubuntu - edit /etc/apparmor.d/usr.sbin.mysqld and add /home/milo to
the accessible paths for the mysqld process.
lines terminated by '\r\n' --> because the file comes from a Windows source
Hope that helps!
http://www.corporate-insyte.com/Blogs/MilosBlog.aspx
Posted by John Swapceinski on September 5, 2011
Create a table using the .csv file's header:
#!/bin/sh
# pass in the file name as an argument: ./mktable filename.csv
echo "create table $1 ( "
head -1 $1 | sed -e 's/,/ varchar(255),\n/g'
echo " varchar(255) );"
http://en.positon.org/post/Import-CSV-file-to-MySQL
This is a combination of different samples pieced together. The @col variables represent the data
elements in the file '$myfile'; the table fields are assigned from these variables. In this case,
values are inserted only into the fields mentioned, omitting columns such as ID, stock, etc.
You can use the LOAD DATA INFILE command to import a CSV file into a table. If everything is fine,
execute the following query to load the data from the CSV file:
<row column1="value1"
column2="value2" .../>
Column names as tags and column
values as the content of these tags:
<row>
<column1>value1</column1>
<column2>value2</column2>
</row>
Column names are the name attributes
of <field> tags, and values are the
contents of these tags:
<row>
<field
name='column1'>value1</field>
<field
name='column2'>value2</field>
</row>
This is the format used by other
MySQL tools, such as mysqldump.
All three formats can be used in the same XML file; the import routine automatically detects the
format for each row and interprets it correctly. Tags are matched based on the tag or attribute name
and the column name.
The following clauses work essentially the same way for LOAD XML as they do for LOAD DATA:
LOW_PRIORITY or CONCURRENT
LOCAL
REPLACE or IGNORE
CHARACTER SET
SET
See Section 13.2.6, LOAD DATA INFILE Syntax, for more information about these clauses.
(field_name_or_user_var, ...) is a list of one or more comma-separated XML fields or user
variables. The name of a user variable used for this purpose must match the name of a field from the
XML file, prefixed with @. You can use field names to select only desired fields. User variables can
be employed to store the corresponding field values for subsequent re-use.
The IGNORE number LINES or IGNORE number ROWS clause causes the first number rows in the
XML file to be skipped. It is analogous to the LOAD DATA statement's IGNORE ... LINES clause.
Suppose that we have a table named person, created as shown here:
USE test;

CREATE TABLE person (
    person_id INT NOT NULL PRIMARY KEY,
    fname VARCHAR(40) NULL,
    lname VARCHAR(40) NULL
);
Now suppose that we have a simple XML file person.xml, whose contents are as shown here:
<list>
  <person person_id="1" fname="Kapek" lname="Sainnouine"/>
  <person person_id="2" fname="Sajon" lname="Rondela"/>
  <person person_id="3"><fname>Likame</fname><lname>rrtmons</lname></person>
  <person person_id="4"><fname>Slar</fname><lname>Manlanth</lname></person>
  <person><field name="person_id">5</field><field name="fname">Stoma</field>
    <field name="lname">Milu</field></person>
  <person><field name="person_id">6</field><field name="fname">Nirtam</field>
    <field name="lname">Skld</field></person>
  <person person_id="7"><fname>Sungam</fname><lname>Dulbd</lname></person>
  <person person_id="8" fname="Sraref" lname="Encmelt"/>
</list>
Each of the permissible XML formats discussed previously is represented in this example file.
To import the data in person.xml into the person table, you can use this statement:
mysql> LOAD XML LOCAL INFILE 'person.xml'
    ->   INTO TABLE person
    ->   ROWS IDENTIFIED BY '<person>';
<?xml version="1.0"?>
<resultset statement="SELECT *
FROM test.person"
xmlns:xsi="http://www.w3.org/2001
/XMLSchema-instance">
<row>
<field
name="person_id">1</field>
<field
name="fname">Kapek</field>
<field
name="lname">Sainnouine</field>
</row>
<row>
<field
name="person_id">2</field>
<field
name="fname">Sajon</field>
<field
name="lname">Rondela</field>
</row>
<row>
<field
name="person_id">3</field>
<field
name="fname">Likema</field>
<field
name="lname">rrtmons</field>
</row>
<row>
<field
name="person_id">4</field>
<field
name="fname">Slar</field>
<field
name="lname">Manlanth</field>
</row>
<row>
<field
name="person_id">5</field>
<field
name="fname">Stoma</field>
<field
name="lname">Nilu</field>
</row>
<row>
<field
name="person_id">6</field>
<field
name="fname">Nirtam</field>
<field
name="lname">Skld</field>
</row>
<row>
<field
name="person_id">7</field>
<field
name="fname">Sungam</field>
<field
name="lname">Dulbd</field>
</row>
<row>
<field
name="person_id">8</field>
<field
name="fname">Sreraf</field>
<field
name="lname">Encmelt</field>
</row>
</resultset>
Note
The --xml option causes the mysql client to use XML formatting for its output; the -e option causes
the client to execute the SQL statement immediately following the option. See Section 4.5.1,
mysql The MySQL Command-Line Tool.
You can verify that the dump is valid by creating a copy of the person table and importing the dump
file into the new table, like this:
mysql> USE test;
mysql> CREATE TABLE person2 LIKE person;
Query OK, 0 rows affected (0.00 sec)
<list>
  <person person_id="1">
    <fname>Robert</fname>
    <lname>Jones</lname>
    <address address_id="1" street="Mill Creek Road" zip="45365" city="Sidney"/>
    <address address_id="2" street="Main Street" zip="28681" city="Taylorsville"/>
  </person>
  <person person_id="2">
    <fname>Mary</fname>
    <lname>Smith</lname>
    <address address_id="3" street="River Road" zip="80239" city="Denver"/>
    <!-- <address address_id="4" street="North Street" zip="37920" city="Knoxville"/> -->
  </person>
</list>
You can again use the test.person table as defined previously in this section, after clearing all the
existing records from the table and then showing its structure as shown here:
mysql> TRUNCATE person;
Query OK, 0 rows affected (0.04 sec)
-- strip everything in xmldata before the start tag <program>
-- (keeps the text from the last occurrence of the tag onward)
UPDATE myxmltable
SET xmldata=CONCAT('<program', SUBSTRING_INDEX(xmldata,'<program',-1))
WHERE xmldata LIKE '%<program%' ;
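The string operation that UPDATE performs can be sketched in Python; CONCAT('<program', SUBSTRING_INDEX(x, '<program', -1)) keeps everything from the last occurrence of the tag onward and re-prepends the tag itself (the sample data is invented):

```python
# Python equivalent of CONCAT('<program', SUBSTRING_INDEX(x, '<program', -1)):
# split on the tag, keep only what follows its LAST occurrence, then
# prepend the tag again so the record starts cleanly at <program.
def strip_leading_junk(xmldata, tag="<program"):
    return tag + xmldata.split(tag)[-1]

s = "garbage bytes<program id='1'>...</program>"
print(strip_leading_junk(s))  # <program id='1'>...</program>
```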
value:
{expr | DEFAULT}
value_list:
value [, value] ...
assignment:
col_name = value
assignment_list:
assignment [, assignment] ...
REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a
new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is
inserted. See Section 13.2.5, INSERT Syntax.
REPLACE is a MySQL extension to the SQL standard. It either inserts, or deletes and inserts. For
another MySQL extension to standard SQL that either inserts or updates, see Section 13.2.5.2,
INSERT ... ON DUPLICATE KEY UPDATE Syntax.
DELAYED inserts and replaces were deprecated in MySQL 5.6. In MySQL 5.7, DELAYED is not
supported. The server recognizes but ignores the DELAYED keyword, handles the replace as a
nondelayed replace, and generates an ER_WARN_LEGACY_SYNTAX_CONVERTED warning.
(REPLACE DELAYED is no longer supported. The statement was converted to REPLACE.)
The DELAYED keyword will be removed in a future release.
Note
REPLACE makes sense only if a table has a PRIMARY KEY or UNIQUE index. Otherwise, it becomes
equivalent to INSERT, because there is no index to be used to determine whether a new row
duplicates another.
Values for all columns are taken from the values specified in the REPLACE statement. Any missing
columns are set to their default values, just as happens for INSERT. You cannot refer to values from
the current row and use them in the new row. If you use an assignment such
as SET col_name = col_name + 1, the reference to the column name on the right hand side is
treated as DEFAULT(col_name), so the assignment is equivalent to SET col_name =
DEFAULT(col_name) + 1.
To use REPLACE, you must have both the INSERT and DELETE privileges for the table.
If a generated column is replaced explicitly, the only permitted value is DEFAULT. For information
about generated columns, see Section 13.1.18.8, CREATE TABLE and Generated Columns.
REPLACE supports explicit partition selection using the PARTITION keyword with a list of comma-
separated names of partitions, subpartitions, or both. As with INSERT, if it is not possible to insert the
new row into any of these partitions or subpartitions, the REPLACE statement fails with the
error Found a row not matching the given partition set. For more information and
examples, see Section 22.5, Partition Selection.
The REPLACE statement returns a count to indicate the number of rows affected. This is the sum of
the rows deleted and inserted. If the count is 1 for a single-row REPLACE, a row was inserted and no
rows were deleted. If the count is greater than 1, one or more old rows were deleted before the new
row was inserted. It is possible for a single row to replace more than one old row if the table contains
multiple unique indexes and the new row duplicates values for different old rows in different unique
indexes.
The affected-rows count makes it easy to determine whether REPLACE only added a row or whether
it also replaced any rows: Check whether the count is 1 (added) or greater (replaced).
If you are using the C API, the affected-rows count can be obtained using
the mysql_affected_rows() function.
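SQLite's INSERT OR REPLACE has the same delete-then-insert resolution, so the "one new row replaces several old rows" case is easy to demonstrate with Python's sqlite3 module; the table and values are invented:

```python
import sqlite3

# One new row can displace several old rows when it conflicts with
# different existing rows through different unique indexes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER UNIQUE, b INTEGER UNIQUE)")
conn.execute("INSERT INTO t VALUES (1, 10), (2, 20)")

# (1, 20) conflicts with row (1, 10) on column a AND with row (2, 20)
# on column b, so BOTH old rows are deleted before the insert:
conn.execute("INSERT OR REPLACE INTO t VALUES (1, 20)")
rows = conn.execute("SELECT a, b FROM t").fetchall()
print(rows)  # [(1, 20)] -- two rows deleted, one inserted
```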
You cannot replace into a table and select from the same table in a subquery.
MySQL uses the following algorithm for REPLACE (and LOAD DATA ... REPLACE):
1. Try to insert the new row into the table.
2. While the insertion fails because a duplicate-key error occurs for a primary key or unique index:
   a. Delete from the table the conflicting row that has the duplicate key value.
   b. Try again to insert the new row into the table.
It is possible that in the case of a duplicate-key error, a storage engine may perform the REPLACE as
an update rather than a delete plus insert, but the semantics are the same. There are no user-visible
effects other than a possible difference in how the storage engine increments Handler_xxx status
variables.
Because the results of REPLACE ... SELECT statements depend on the ordering of rows from
the SELECT and this order cannot always be guaranteed, it is possible when logging these
statements for the master and the slave to diverge. For this reason, REPLACE ...
SELECT statements are flagged as unsafe for statement-based replication. Such statements produce
a warning in the error log when using statement-based mode and are written to the binary log using
the row-based format when using MIXED mode. See also Section 16.2.1.1, Advantages and
Disadvantages of Statement-Based and Row-Based Replication.
When modifying an existing table that is not partitioned to accommodate partitioning, or, when
modifying the partitioning of an already partitioned table, you may consider altering the table's
primary key (see Section 22.6.1, Partitioning Keys, Primary Keys, and Unique Keys). You should
be aware that, if you do this, the results of REPLACE statements may be affected, just as they would
be if you modified the primary key of a nonpartitioned table. Consider the table created by the
following CREATE TABLE statement:
CREATE TABLE test (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  data VARCHAR(64) DEFAULT NULL,
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);
When we create this table and run the statements shown in the mysql client, the result is as follows:
CREATE TABLE T (
`id` int(10) unsigned NOT NULL auto_increment,
PRIMARY KEY (`id`)
);
CREATE TABLE F (
`foreign_id` int(10) unsigned NOT NULL,
CONSTRAINT `fkey` FOREIGN KEY (`foreign_id`) REFERENCES `T` (`id`) ON DELETE
CASCADE ON UPDATE CASCADE
);
Please note that REPLACE INTO is a much slower performer than an UPDATE statement. Keep in mind that
REPLACE INTO requires a test on the keys; if a matching unique key is found on any or all columns, a
DELETE FROM is executed, then an INSERT. There is a lot of row management involved, and if you do
this frequently, you will hurt your performance unless you simply cannot use any other syntax.
The only time you actually need REPLACE INTO is when you have multiple unique constraints on a
table and need to drop any rows that would match any of the constraints. Then REPLACE INTO becomes
more efficient than DELETE FROM ... followed by INSERT INTO .... If you are looking at a table with
a single unique column (primary key), please use UPDATE or INSERT. Also, check out
INSERT ... ON DUPLICATE KEY UPDATE as an alternative if you are willing to stick to MySQL 4.1+.
We did not realize this, and the triggers defined on DELETE fired. After checking all the code, we
found a script that did a REPLACE to refresh the values of some fields. We should have used the
"insert into ... on duplicate key update" syntax instead.
Also note that if a table's only key is an auto-increment primary key, executing the same
"REPLACE INTO ... VALUES (...),(...);" query repeatedly simply inserts new records each time.
"For multiple-row INSERT statements or INSERT INTO ... SELECT statements, the column is set to the implicit
default value for the column data type. This is 0 for numeric types, the empty string ('') for string types, and the
zero value for date and time types."
also appears to apply to a single row "replace into" query, which can be very confusing to debug when it
appears to not obey the table constraints and just turns nulls/missing columns into empty strings. This can
particularly be a problem if you have a unique constraint on one of those columns.
mysql> SELECT 1 + 1;
-> 2
You are permitted to specify DUAL as a dummy table name in situations where no tables are
referenced:
mysql> SELECT 1 + 1 FROM DUAL;
-> 2
DUAL is purely for the convenience of people who require that all SELECT statements should
have FROM and possibly other clauses. MySQL may ignore the clauses. MySQL does not
require FROM DUAL if no tables are referenced.
In general, clauses used must be given in exactly the order shown in the syntax description. For
example, a HAVING clause must come after any GROUP BY clause and before any ORDER BY clause.
The exception is that the INTO clause can appear either as shown in the syntax description or
immediately following the select_expr list. For more information about INTO, see Section 13.2.9.1,
SELECT ... INTO Syntax.
The list of select_expr terms comprises the select list that indicates which columns to retrieve.
Terms specify a column or expression or can use *-shorthand:
A select list consisting only of a single unqualified * can be used as shorthand to select all
columns from all tables:
SELECT * FROM t1 INNER JOIN t2 ...
tbl_name.* can be used as a qualified shorthand to select all columns from the named table:
SELECT t1.*, t2.* FROM t1 INNER JOIN t2 ...
Use of an unqualified * with other items in the select list may produce a parse error. To avoid this
problem, use a qualified tbl_name.* reference:
SELECT AVG(score), t1.* FROM t1 ...
The following list provides additional information about other SELECT clauses:
A select_expr can be given an alias using AS alias_name. The alias is used as the expression's
column name and can be used in GROUP BY, ORDER BY, or HAVING clauses. For example:
SELECT CONCAT(last_name,', ',first_name) AS full_name
  FROM mytable ORDER BY full_name;
The AS keyword is optional when aliasing a select_expr with an identifier. The preceding example
could have been written like this:
SELECT CONCAT(last_name,', ',first_name) full_name
  FROM mytable ORDER BY full_name;
However, because the AS is optional, a subtle problem can occur if you forget the comma between
two select_expr expressions: MySQL interprets the second as an alias name. For example, in the
following statement, columnb is treated as an alias name:
SELECT columna columnb FROM mytable;
For this reason, it is good practice to be in the habit of using AS explicitly when specifying
column aliases.
It is not permissible to refer to a column alias in a WHERE clause, because the column value might
not yet be determined when the WHERE clause is executed. See Section B.5.4.4, Problems with
Column Aliases.
The FROM table_references clause indicates the table or tables from which to retrieve rows. If you
name more than one table, you are performing a join. For information on join syntax, see
Section 13.2.9.2, JOIN Syntax. For each table specified, you can optionally specify an alias:
tbl_name [[AS] alias] [index_hint]
The use of index hints provides the optimizer with information about how to choose indexes during
query processing. For a description of the syntax for specifying these hints, see Section 8.9.4,
Index Hints. You can use SET max_seeks_for_key=value as an alternative way to force MySQL to
prefer key scans instead of table scans. See Section 5.1.5, Server System Variables.
You can refer to a table within the default database as tbl_name, or as db_name.tbl_name to
specify a database explicitly. You can refer to a column as col_name, tbl_name.col_name, or
db_name.tbl_name.col_name. You need not specify a tbl_name or db_name.tbl_name prefix for a
column reference unless the reference would be ambiguous. See Section 9.2.1, Identifier
Qualifiers, for examples of ambiguity that require the more explicit column reference forms.
A table reference can be aliased using tbl_name AS alias_name or tbl_name alias_name:
SELECT t1.name, t2.salary
  FROM employee AS t1, info AS t2
  WHERE t1.name = t2.name;
SELECT t1.name, t2.salary
  FROM employee t1, info t2
  WHERE t1.name = t2.name;
Columns selected for output can be referred to in ORDER BY and GROUP BY clauses using column
names, column aliases, or column positions. Column positions are integers and begin with 1:
SELECT college, region, seed FROM tournament
  ORDER BY region, seed;
SELECT college, region AS r, seed AS s FROM tournament
  ORDER BY r, s;
SELECT college, region, seed FROM tournament
  ORDER BY 2, 3;
To sort in reverse order, add the DESC (descending) keyword to the name of the column in the
ORDER BY clause that you are sorting by. The default is ascending order; this can be specified
explicitly using the ASC keyword.
If ORDER BY occurs within a subquery and also is applied in the outer query, the outermost
ORDER BY takes precedence. For example, results for the following statement are sorted in
descending order, not ascending order:
(SELECT ... ORDER BY a) ORDER BY a DESC;
Use of column positions is deprecated because the syntax has been removed from the SQL standard.
A SELECT from a partitioned table using a storage engine such as MyISAM that employs table-level
locks locks only those partitions containing rows that match the SELECT statement WHERE clause.
(This does not occur with storage engines such as InnoDB that employ row-level locking.) For more
information, see Section 22.6.4, Partitioning and Locking.
Use the IF function to select on the key value of the sub-table, as in:
SELECT
SUM(IF(beta_idx=1, beta_value,0)) as beta1_value,
SUM(IF(beta_idx=2, beta_value,0)) as beta2_value,
SUM(IF(beta_idx=3, beta_value,0)) as beta3_value
FROM alpha JOIN beta WHERE alpha_id = beta_alpha_id;
This will create 3 columns with totals of beta values according to their idx field
Unfortunately, variables cannot be used in the LIMIT clause, otherwise the entire thing could be done
completely in SQL.
If your tables are not all that big, a simpler method is:
SELECT * FROM foo ORDER BY RAND(NOW()) LIMIT 1;
This finds all the CarIndex values in the Dealer's catalog that are in the bigger distributor catalog.
How do I then find the dealer CarIndex values that ARE NOT in the bigger catalog?
The answer is to use LEFT JOIN - anything that doesn't join is given a NULL value , so we look for that:
SELECT db1.*
FROM tbl_data db1, tbl_data db2
WHERE db1.id <> db2.id
AND db1.name = db2.name
(I'm not sure whether the code is correct but in my case it works)
Johann
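The LEFT JOIN anti-join pattern that the comment above describes can be sketched with Python's bundled sqlite3 module; the table and column names (dealer, distributor, car_index) are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dealer (car_index INTEGER);
CREATE TABLE distributor (car_index INTEGER);
INSERT INTO dealer VALUES (1), (2), (3);
INSERT INTO distributor VALUES (1), (3);
""")

# Rows with no match on the right side come back as NULL, so filtering on
# IS NULL keeps exactly the dealer rows absent from the distributor catalog.
missing = con.execute("""
    SELECT d.car_index
    FROM dealer d LEFT JOIN distributor x ON d.car_index = x.car_index
    WHERE x.car_index IS NULL
""").fetchall()
print(missing)  # [(2,)]
```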
For example: The table 'runs' contains 34876 rows. 205 rows have an 'info' field containing the string 'wrong'.
To select those rows for which the 'info' column does *NOT* contain the word 'wrong' one has to do:
mysql> select count(*) FROM runs WHERE info is null or info not like '%wrong%';
+----------+
| count(*) |
+----------+
| 34671 |
+----------+
but not:
mysql> select count(*) FROM runs WHERE info not like '%wrong%';
+----------+
| count(*) |
+----------+
| 5537 |
+----------+
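The reason is SQL's three-valued logic: for rows where info is NULL, the comparison info NOT LIKE '%wrong%' evaluates to NULL rather than true, so those rows are silently filtered out. A minimal sketch of the same trap using Python's sqlite3 module, with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE runs (info TEXT);
INSERT INTO runs VALUES ('wrong answer'), ('ok'), (NULL);
""")

# NOT LIKE against a NULL value yields NULL, not true, so the NULL row drops out.
naive = con.execute(
    "SELECT COUNT(*) FROM runs WHERE info NOT LIKE '%wrong%'"
).fetchone()[0]
# Testing for NULL explicitly keeps those rows in the count.
fixed = con.execute(
    "SELECT COUNT(*) FROM runs WHERE info IS NULL OR info NOT LIKE '%wrong%'"
).fetchone()[0]
print(naive, fixed)  # 1 2
```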
$min=1;
$row=mysql_fetch_assoc(mysql_query("SHOW TABLE STATUS LIKE 'table';"));
$max=$row["Auto_increment"];
$random_id=rand($min,$max);
$row=mysql_fetch_assoc(mysql_query("SELECT * FROM table WHERE id='$random_id'"));
Voila...
Cezar
Regards,
Geert van der Ploeg
The /* construct will stop DBMS's other than MySQL from parsing the comment contents, while /*! will tell ALL
MySQL versions to parse the "comment" (which is actually a non-comment to MySQL). The /*!40000 construct
will tell MySQL servers starting from 4.0.0 (which is the first version to support SQL_CALC_FOUND_ROWS) to
parse the comment, while earlier versions will ignore it.
SELECT * [or any needed fields], idx*0+RAND() as rnd_id FROM tablename ORDER BY rnd_id LIMIT 1 [or the
number of rows]
In a database with personal information (name, surname, etc..) with an auto_increment index I wanted to
retrieve all the entries with same name and surname field (duplicate names), which by accident were inserted
to the base.
I hope this might be of help to anyone that wants to do some extended maintenance on the database
SELECT *
FROM ccr_news
WHERE insert_date > 0;
or, if for some reason MySQL still uses a full table scan:
SELECT *
FROM ccr_news FORCE INDEX (ccr_news_insert_date_i)
WHERE insert_date > 0;
id name category
1 Henry Miller 2
3 June Day 1
3 Thomas Wolf 2
id category
1 Modern
2 Classics
Then just select the categories from the reference table and put
the list into a numbered array. Then in your script when you run
across a category number from the first recordset just reference
the value from the index in the second array to obtain the value.
In php in the above example it might look like:
This may seem obvious to some but I was pulling my hair out
trying to figure out how to order a recordset based on a list
from a different table. Hope this helps someone.
Ed
Make sure the current user actually has write permission in that directory.
Regards,
Kumar.S
An example is a name/address form where the country is a selectable list. If most of your users are from the
UK and US you may want to do something like:
+----------+----------------------------------------+
| iso_code | name |
+----------+----------------------------------------+
| UK | United Kingdom |
| US | United States |
| AF | Afghanistan |
| AL | Albania |
| DZ | Algeria |
| AS | American Samoa |
$sql9 = "SELECT DISTINCT field AS distinctfield FROM table ORDER BY distinctfield ";
$res9= $db->execute($sql9);
for($ll=0;$ll<$res9->getNumTuples();$ll++)
{
$row = $res9->getTupleDirect($ll);
$distinctfield = $row[distinctfield];
$sql8="select * from table WHERE field='$distinctfield' ORDER BY distinctfield LIMIT 1";
}
Fahed
Then you can check the output. Note that there are twice as many rows as records, because each unique row is
followed by its count (in this case count=1). So just load the .txt file into something, sort on the field
containing the count, and throw out all the rows where count=1. This gives the same result as a SELECT
DISTINCT field (as far as I can tell).
count(FIELD)=>1
Lex
I found that if you also add another 'iso_code' column in the ORDER BY clause after the first one containing
the IN() expression, it will sort the remaining records:
SELECT * FROM countries ORDER by iso_code IN ('UK', 'US') desc, iso_code
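This trick works because iso_code IN ('UK', 'US') evaluates to 1 or 0, and that value can itself be sorted on. A quick sketch with Python's sqlite3 module, which sorts the same expression the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE countries (iso_code TEXT, name TEXT);
INSERT INTO countries VALUES ('AF','Afghanistan'), ('UK','United Kingdom'),
                             ('AL','Albania'), ('US','United States');
""")

# The IN() expression is 1 for the favored codes and 0 otherwise; sorting it
# DESC floats UK and US to the top, then the second key sorts each group.
rows = con.execute("""
    SELECT iso_code FROM countries
    ORDER BY iso_code IN ('UK', 'US') DESC, iso_code
""").fetchall()
print([r[0] for r in rows])  # ['UK', 'US', 'AF', 'AL']
```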
This will add the text headers Fiscal Year, Location and Sales to your fields. The only caveat is with an ORDER
BY statement: if you don't want your headers sorted along with your data, you need to enclose it in parentheses:
So the whole query for retrieving a whole row with one field distinct is:
Make sure that the format of the columns that match up with your headers doesn't limit the display of the
headers. For instance, I was using the UNION tip to add a header to a column defined as char(2) (for storing a
two-letter state code). The resulting CSV file only displayed the first two letters of my column header. The fix is
simple, just use CAST() on the column in the second SELECT to convert it to the appropriate type. In my case,
doing something like this:
SELECT 'state header' FROM table UNION SELECT CAST(state AS char) FROM table INTO OUTFILE [...]
For example, if you want to get a list of users from a table UserActions sorted according to the most recent
action (based on a field called Time) the query would be:
SELECT * FROM (SELECT * FROM UserActions ORDER BY Time DESC) AS Actions GROUP BY UserID
ORDER BY Time DESC;
Without the subquery, the group is performed first, and so the first record that appears in the database (which
is not necessarily in the order you want) will be used to determine the sort order. This caused me huge
problems as my data was in a jumbled order within the table.
--Edit--
This same result can be achieved with the use of MAX(Time), so the query would be:
As far as I can see, the subquery model still holds up if you need more complex sorting before performing the
GROUP.
CREATE TEMPORARY TABLE dupes SELECT * FROM tablename GROUP BY colname HAVING
COUNT(*)>1 ORDER BY colname;
SELECT t.* FROM tablename t, dupes d WHERE t.colname = d.colname ORDER BY t.colname;
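The same two-step recipe can also be collapsed into one statement with a derived table in place of the temporary table; a hedged sketch using Python's sqlite3 module, with invented table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE people (id INTEGER, name TEXT);
INSERT INTO people VALUES (1,'ann'), (2,'bob'), (3,'ann'), (4,'cid');
""")

# First find the duplicated key values (the role of the temporary table),
# then join back to the base table to pull the full rows.
rows = con.execute("""
    SELECT p.id, p.name
    FROM people p
    JOIN (SELECT name FROM people GROUP BY name HAVING COUNT(*) > 1) d
      ON p.name = d.name
    ORDER BY p.name, p.id
""").fetchall()
print(rows)  # [(1, 'ann'), (3, 'ann')]
```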
Two things:
1) The options in mysqldump can be in any order, because they are true command-line options (that is, they
are conceptually used together, but syntactically separate on the mysqldump command line). The options in the
SELECT...INTO OUTFILE need to be in the exact order as specified in the documentation above.
2) The options MUST have dashes between the words (e.g., fields-enclosed-by) when used as options with the
mysqldump utility, but MUST NOT have dashes when used as options with SELECT...INTO OUTFILE. This
may not be clear in the documentation above.
Wayne
If you want to select all fields from distinct rows why not use:
SELECT DISTINCT * FROM table GROUP BY field;
Don't forget the DISTINCT relates to the ORDER BY / GROUP BY and has nothing to do with the 'select_expr'
Kumar/Germany
Example:
I prefer this way of sorting table by column values listed in another table:
The accnumber column in primary table contains ID values from ID column in the secondary table.
I haven't found a way to do it in SQL; here is a way to do it in PHP (just replace 'order_by' with the field you
want to order by):
As noted above, the output directory must be writable by the id under which the mysqld process is running. Use
"grep user= /etc/my.cnf " to find it.
I use this method to transfer small amounts of data from our live database to our test database, for example
when investigating a reported problem in our program code. (We cannot guarantee the field order across all our
databases.)
SELECT id_class,id FROM tbl,(SELECT MAX(val) AS val FROM tbl GROUP BY id_class) AS _tbl WHERE
tbl.val = _tbl.val;
We had a table logging state changes for a series of objects and wanted to find the most recent state for each
object. The "val" in our case was an auto-increment field.
This seems to be the simplest solution that runs in a reasonable amount of time.
INSERT INTO `classdescription` VALUES (2, 'firstaid', '', '2005-01-02 11:00:00', 2);
INSERT INTO `classdescription` VALUES (3, 'advanced-med', '', '2005-01-02 13:00:00', 1);
Now use RIGHT JOIN to list all class descriptions along with signups if any,
SELECT cs.s_ClassID, cs.s_PersonID, cd.ClassName, cd.ClassID, cd.ClassType, cd.ClassDate, cd.ClassMax
from class_signups cs RIGHT JOIN
classdescription cd on (cs.s_ClassID = cd.ClassID )
in itself, not too useful, but you can see classes
having no one signed up as a NULL.
Now we show only classes where the count of signups is less than ClassMax, meaning the
class has openings!
SELECT cs.s_ClassID, COUNT(s_ClassID) AS ClassTotal, cd.ClassName, cd.ClassID, cd.ClassType,
cd.ClassDate, cd.ClassMax
from class_signups cs RIGHT JOIN
classdescription cd on (cs.s_ClassID = cd.ClassID )
GROUP BY cd.ClassID
HAVING ClassTotal < cd.ClassMax
The HAVING clause limits the after-JOIN output rows to ones matching its criteria, discarding others!
We may want to look only at the firstaid ClassType, so add a WHERE clause to
the JOIN,
SELECT cs.s_ClassID, COUNT(s_ClassID) AS ClassTotal, cd.ClassName, cd.ClassID, cd.ClassType,
cd.ClassDate, cd.ClassMax
from class_signups cs RIGHT JOIN
classdescription cd on (cs.s_ClassID = cd.ClassID ) WHERE cd.ClassType='firstaid'
GROUP BY cd.ClassID
HAVING ClassTotal < cd.ClassMax
Now there are no outputs as firstaid is full, but
suppose we are looking in this list with respect
to a certain student PersonID==12. That is, we want to see classes this person can signup
for, including the ones they are already in!
In the case we need to disregard signups by PersonID==12 for e.g.,
-- insert all ids (and optional labels (for use in a page selector))
SET @a=-1;
INSERT INTO AllRows SELECT @a:=@a+1 AS rownum, id, CONCAT(first_name, ' ', last_name) AS label
FROM yourTable;
The NumberSeq table just contains the numbers 0, 1, 2, 3, ... 500 (or whatever limit you want to set on number
of pages..):
With smaller record sets the second approach is faster than the prepared statement approach. Haven't
checked speed with bigger record sets, but suspect the first approach will win then...
Hope this helps to get around the limitations of the LIMIT clause. To the MySQL team: any plans to allow user
variables in the LIMIT clause? (pleeeze!)
+----------+---------------------------+
| id | name |
+----------+---------------------------+
| 23 | rene |
| 234 | miguel |
| 543 | ana |
| 23 | tlaxcala |
SELECT * FROM table ORDER BY FIELD( name, 'miguel', 'rene', 'ana', 'tlaxcala' )
+----------+---------------------------+
| id | name |
+----------+---------------------------+
| 234 | miguel |
| 23 | rene |
| 543 | ana |
| 23 | tlaxcala |
Although there is still overhead compared to ORDER BY NULL, it's not as bad as ORDER BY RAND().
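For engines without FIELD(), the same fixed ordering can be emulated with a CASE expression; here is a sketch using Python's sqlite3 module (SQLite has no FIELD() function), reproducing the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, name TEXT);
INSERT INTO t VALUES (23,'rene'), (234,'miguel'), (543,'ana'), (23,'tlaxcala');
""")

# CASE maps each name to its desired rank, mimicking FIELD(name, ...).
rows = con.execute("""
    SELECT name FROM t
    ORDER BY CASE name
        WHEN 'miguel' THEN 1 WHEN 'rene' THEN 2
        WHEN 'ana' THEN 3 WHEN 'tlaxcala' THEN 4 END
""").fetchall()
print([r[0] for r in rows])  # ['miguel', 'rene', 'ana', 'tlaxcala']
```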
SELECT partnum, comments , if( partnum >0, cast( partnum AS SIGNED ) , 0 ) AS numpart,
if( partnum >0, 0, partnum ) AS stringpart
FROM `part`
ORDER BY `numpart` , `stringpart`
Posted by Frank Flynn on October 6, 2006
If you wish to use OUTFILE or DUMPFILE with a variable for the file name you cannot simply put it in place -
MySQL will not resolve the name.
But you can put the whole command into a variable and use "prepare" and "execute" for example:
The problem with that solution from the MySQL standpoint is that there still remains the possibility of duplicate
selections when we want more than one row, especially if the table is not that large (e.g. what are the chances
of getting at least 2 duplicate rows while selecting 5 randomly, 1 at a time, out of a set of 10).
My approach is to rather generate unique random numbers from php, then fetch the corresponding table rows:
1- Use the appropriate php methods to fetch the table count from MySQL as done before:
SELECT COUNT(*) FROM foo;
2- Use php to generate some unique random numbers based on the count.
This is the php function that i use. It takes 3 arguments: the minimum and maximum range values, and the
amount of unique random numbers to be returned. It returns these numbers as an array.
<?php
/*Array of Unique Random Numbers*/
function uniq_rand($min,$max,$size){
$randoms=array(); //this is our array
return $randoms;
}
?>
3- Once you receive your set of randoms from the above function, perform a query for each random:
<?php
foreach($randoms as $random_row){
$query="SELECT * FROM foo LIMIT $random_row, 1;"
//perform query, retrieve values and move on to the next random row
...
}
?>
That's it
-----
On a side note regarding the php random number generation function that I have here, I'm sure it's not the best
solution all the time. For example, the closer the amount of random numbers gets to the range of numbers
available the less efficient the function gets, i.e. if you have a range of 300 numbers and you want 280 of them
unique and random, the function could spend quite some time trying to get the last 10 numbers into the array.
Some probabilities get involved here, but I suspect that it would be faster to insert the 300 numbers directly into
an array, shuffle that array, then finally select the 280 first entries and return them.
Also, as pointed earlier in the thread, keep in mind that if your table isn't that large, just performing the following
works very well (e.g. selecting 5 random rows on a moderately large table):
SELECT * FROM foo ORDER BY RAND() LIMIT 5;
It's very easy, but I think this query is very heavy for MySQL if table2 contains around 1000-5000 rows and the
site has 5000-6000 visitors per second.
Now my task is more complex; I need to select any site from table2:
1 - In the current language
2 - If the site has no title, select the title in the default language
3 - If the site has no title in the default language, select the title in any language.
I think if I do it by that method, it will be too heavy for MySQL.
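One hedged way to express that three-level fallback in a single statement is COALESCE over three scalar subqueries (current language, then default, then any); a sketch with Python's sqlite3 module and invented table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE titles (site_id INTEGER, lang TEXT, title TEXT);
INSERT INTO titles VALUES (1,'en','Home'), (1,'de','Start'), (2,'fr','Accueil');
""")

# COALESCE returns the first non-NULL argument: current language ('de'),
# then the default ('en'), then any title the site has at all.
row = con.execute("""
    SELECT COALESCE(
        (SELECT title FROM titles WHERE site_id = 2 AND lang = 'de'),
        (SELECT title FROM titles WHERE site_id = 2 AND lang = 'en'),
        (SELECT title FROM titles WHERE site_id = 2 LIMIT 1)
    )
""").fetchone()
print(row)  # ('Accueil',)
```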
This finds all the CarIndex values in the Dealer's catalog that are in the bigger distributor catalog.
How do I then find the dealer CarIndex values that ARE NOT in the bigger catalog?
The answer is to use LEFT JOIN - anything that doesn't join is given a NULL value , so we look for that:
I have found that the Left Join is quite expensive when doing this type of SQL Query. It is great if you have less
than 1000 records in each table that you want to compare. But the real hardship is realized when you have
100,000 records in each table. Trying to do this type of join takes forever because each and every record in 1
table has to be compared to each and every record in the other table. In the case of 100,000 records, MySQL
will do 10 BILLION comparisons (from what I have read, I may be mistaken).
So I tried the sql query above to see which rows in 1 table do not have a corresponding value in the other table.
(Note that each table had close to 100,000 rows) I waited for 10 minutes and the Query was still going. I have
since came up with a better way that works for me and I hope it will work for someone else. Here goes....
1: You must create another field in your base table. Let's call the new field `linked` (For the example above, we
would perform this query ---ONLY ONCE--- to create the linked field in the DealerCatalog table.)
2: Now to get your results, simply execute the following queries instead of the left join query stated above
I know it is 3 queries instead of 1 but I am able to achieve the same result with 100K rows in each table in
about 3 seconds instead of 10 minutes (That is just how long I waited until I gave up. Who knows how long it
actually takes) using the LEFT JOIN method.
I would like to see if anyone else has a better way of dealing with this type of situation. I have been looking for
a better solution for a few years now. I haven't tried MySQL 5 yet to see if there is a way to maybe create a
view to deal with this situation but I suspect MySQL developers know about the expensive LEFT JOIN....IS
NULL situation on large tables and are doing something about it.
For our search feature, we needed to get an id using a stored function. Since it was in the WHERE clause, it
reprocesses the function for every row! This could turn out to be pretty heavy.
In our case we went from 6.5 sec query to 0.48 sec. We have over 2 million rows in our tables.
-- Query
SELECT
s.`state_code`,
s.`state_name`,
IF(state_id IN
(SELECT d2s.state_id
FROM drove_to_states d2s
WHERE driver_id = '%u'
), 1, null)
`selected`
FROM `states` s
ORDER BY `state_name` ASC;
Using PHP's sprintf command, we can create a select field using this query:
<?php
[...]
$driver_id = 1;
define("QUERY", (SEE ABOVE) );
define("OPTION",'<option value="%s"%s>%s</option>');
$query = mysql_query(sprintf(QUERY, $driver_id), $connect);
echo '<select>';
while(list($code,$state,$selected) = mysql_fetch_row($query))
{
$selected = is_null($selected) ? null : ' selected';
echo sprintf(OPTION, $code, $selected, $state);
}
echo '</select>';
[...]
?>
Objects
+----+------+------+
| id | type | name |
+====+======+======+
| 1  | T1   | O1   |
| 2  | T2   | O2   |
| 3  | T1   | O3   |
| 4  | T2   | O4   |
Attributes
+-------+-----+-------+
| objId | key | value |
+=======+=====+=======+
| 1     | K1  | V1    |
| 1     | K2  | V2    |
| 2     | K3  | V3    |
| 2     | K4  | V4    |
| 2     | K5  | V5    |
| 3     | K1  | V6    |
| 3     | K2  | V7    |
The common approach for selecting the attributes of each object into a single result-row per object is to join
Objects with Attributes multiple times. However, not only such SELECT can grow very big and ugly, with large
tables it becomes very slow.
This could be dealt with using group-by functions, so to select all the objects of type T1, use the following SQL:
| SELECT
| o.id,
| o.name,
| MAX(IF(a.key='K1', a.value, null)) as K1,
| MAX(IF(a.key='K2', a.value, null)) as K2
| FROM
| Objects o,
| Attributes a
| WHERE
| o.id = a.objid and
| o.type = 'T1'
| GROUP BY
| o.id
The result will be:
+----+------+----+----+
| id | name | K1 | K2 |
+====+======+====+====+
| 1 | O1 | V1 | V2 |
| 3 | O3 | V6 | V7 |
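A runnable sketch of this pivot using Python's sqlite3 module, with CASE WHEN standing in for MySQL's IF() (the lowercase table names and quoted column names are adaptations for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE objects (id INTEGER, type TEXT, name TEXT);
CREATE TABLE attributes (objid INTEGER, "key" TEXT, "value" TEXT);
INSERT INTO objects VALUES (1,'T1','O1'), (2,'T2','O2'), (3,'T1','O3');
INSERT INTO attributes VALUES (1,'K1','V1'), (1,'K2','V2'),
                              (3,'K1','V6'), (3,'K2','V7');
""")

# MAX over a CASE picks out each attribute's value per group, collapsing the
# key/value rows for one object into a single result row.
rows = con.execute("""
    SELECT o.id, o.name,
           MAX(CASE WHEN a."key" = 'K1' THEN a."value" END) AS K1,
           MAX(CASE WHEN a."key" = 'K2' THEN a."value" END) AS K2
    FROM objects o JOIN attributes a ON o.id = a.objid
    WHERE o.type = 'T1'
    GROUP BY o.id, o.name
    ORDER BY o.id
""").fetchall()
print(rows)  # [(1, 'O1', 'V1', 'V2'), (3, 'O3', 'V6', 'V7')]
```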
http://www.halfgaar.net/sql-joins-are-easy
You should use the following syntax to create a CSV file in the format expected by Microsoft Excel:
... INTO OUTFILE '/temp.csv' FIELDS ESCAPED BY '""' TERMINATED BY ',' ENCLOSED BY '"' LINES
TERMINATED BY '\r\n';
However fields with carriage returns may break the CSV as MySQL will automatically close a field when the \r\n
line break is found. To work around this, replace all \r\n breaks with \n. The field does not close on \n breaks
and it will be read into a single cell in Excel. You can do this in the same SQL statement, for example:
SELECT REPLACE(field_with_line_breaks, '\r\n', '\n') FROM table INTO OUTFILE '/temp.csv' FIELDS
ESCAPED BY '""' TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
I also found that null values could break the CSV. These can be handled in a similar way:
SELECT IFNULL(possible_null_field, "") FROM table INTO OUTFILE '/temp.csv' FIELDS ESCAPED BY '""'
TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
Note: this replaces NULL values with an empty string which is technically not the same thing but it will give you
an empty cell in Excel instead of breaking the CSV structure and shifting the following cells to the left.
SELECT 'cheap' AS priceCat, productName FROM MyProducts WHERE price < 1000
UNION
SELECT 'moderate' AS priceCat, productName FROM MyProducts WHERE price >= 1000 AND price <2000
UNION
SELECT 'expensive' AS priceCat, productName FROM MyProducts WHERE price >= 2000
It essentially returns a two column result set. The first column contains the word 'cheap', 'moderate' or
'expensive' depending on the price of the product. The second column is the product name. This query can
easily be modified to return a count of number of products categorized by the price range:
SELECT 'cheap' AS priceCat, COUNT(*) productCount FROM MyProducts WHERE price < 1000
UNION
SELECT 'moderate' AS priceCat, COUNT(*) FROM MyProducts WHERE price >= 1000 AND price <2000
UNION
SELECT 'expensive' AS priceCat, COUNT(*) FROM MyProducts WHERE price >= 2000
It may sound like an obvious thing to an experienced SQL guy, but I think this tip will be useful to a beginner.
Hope this tip helps a SQL developer soon! ;-)
I have had issues passing SQL statements into arrays, when the "ARRAY_A" declaration is not made.
This totals the 'groupings' but then removes those rows from the query. At the moment it is believed that an
optimisation was performed for the 'WITH ROLLUP' that didn't make it into the main optimisation...
HTH
(This creates a tab-delimited file test.dat with column names in the first row followed by the query results.)
$sql2 = "SELECT * FROM $tbl_name WHERE CompanyName LIKE '%". $query ."%' OR description LIKE '%".
$query ."%' OR KeywordTags LIKE '%". $query ."%' AND Active='yes' AND State='Florida' ";
$sql2 = "SELECT * FROM $tbl_name WHERE (CompanyName LIKE '%". $query ."%' OR description LIKE '%".
$query ."%' OR KeywordTags LIKE '%". $query ."%' AND Active='yes') AND State='Florida' ";
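The fix works because AND binds more tightly than OR, so without parentheses the Active and State conditions attach only to the last LIKE. A minimal demonstration of the precedence difference with Python's sqlite3 module and invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE c (name TEXT, active TEXT, state TEXT);
INSERT INTO c VALUES ('Acme','yes','Texas'), ('Bolt','yes','Florida');
""")

# Without parentheses this parses as: name LIKE ... OR (active='yes' AND state=...),
# so 'Acme' matches on the LIKE alone and slips past the state filter.
loose = con.execute(
    "SELECT name FROM c WHERE name LIKE '%A%' OR active='yes' AND state='Florida' "
    "ORDER BY name"
).fetchall()
# With parentheses, the state filter applies to every row.
strict = con.execute(
    "SELECT name FROM c WHERE (name LIKE '%A%' OR active='yes') AND state='Florida' "
    "ORDER BY name"
).fetchall()
print(loose, strict)  # [('Acme',), ('Bolt',)] [('Bolt',)]
```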
Regards,
Elliot
http://www.sioure.com
www.puribe.cl
1) Let `task` be a MySql table containing at least 2 columns: id (primary key), pid (parent id - may be NULL) so
that the rows form a classic tree structure.
2) Suppose you want to extract the tree relative to a particular actual id (constituted by itself and all its spawns)
so that you need a recursive select which is unfortunately not implemented in MySql.
-----------------
Note: Don't forget to set the global variable max_sp_recursion_depth to an adequate positive value (for instance
in the file 'my.ini').
This will add the text headers Fiscal Year, Location and Sales to your fields. The only caveat is with an ORDER
BY statement: if you don't want your headers sorted along with your data, you need to enclose it in parentheses:
END QUOTE...
Here is a more dynamic option for adding column_names to the top of output...
SELECT Place, count(*) FROM Testing WHERE SUBSTRING(Place,2,1) IN ('c', 'u', 's') GROUP BY Place
HAVING count(*)>=2 ORDER BY count(*) asc, Place DESC;
SELECT CONCAT(employer.companyname, ', ', employer.division ,', ', city, ', ', statecode, ' ', zipcode) AS
"Employer Info"
FROM employer, interview
WHERE employer.companyname=interview.companyname and
employer.division=interview.division
and listing='y'
ORDER BY zipcode ASC, employer.companyname DESC, employer.division ASC;
SELECT statecode FROM state WHERE statecode NOT IN (SELECT location FROM quarter);
Many people find subqueries more readable than complex joins or unions. Indeed, it was the
innovation of subqueries that gave people the original idea of calling the early SQL
"Structured Query Language."
Here is an example statement that shows the major points about subquery syntax as specified by
the SQL standard and supported in MySQL:
DELETE FROM t1
WHERE s11 > ANY
(SELECT COUNT(*) /* no hint */
FROM t2
WHERE NOT EXISTS
(SELECT * FROM t3
WHERE ROW(5*t2.s1,77)=
(SELECT 50,11*s1 FROM t4
UNION SELECT 50,77 FROM
(SELECT * FROM t5) AS
t5)));
A subquery can return a scalar (a single value), a single row, a single column, or a table (one or
more rows of one or more columns). These are called scalar, column, row, and table subqueries.
Subqueries that return a particular kind of result often can be used only in certain contexts, as
described in the following sections.
There are few restrictions on the type of statements in which subqueries can be used. A subquery
can contain many of the keywords or clauses that an ordinary SELECT can contain: DISTINCT, GROUP
BY, ORDER BY, LIMIT, joins, index hints, UNION constructs, comments, functions, and so on.
A subquery's outer statement can be any one of: SELECT, INSERT, UPDATE, DELETE, SET, or DO.
In MySQL, you cannot modify a table and select from the same table in a subquery. This applies to
statements such as DELETE, INSERT, REPLACE, UPDATE, and (because subqueries can be used in
the SET clause) LOAD DATA INFILE.
For information about how the optimizer handles subqueries, see Section 8.2.2, Optimizing
Subqueries, Derived Tables, and View References. For a discussion of restrictions on subquery
use, including performance issues for certain forms of subquery syntax, see Section C.4,
Restrictions on Subqueries.
Ever wanted to turn an AUTO_INCREMENT primary key into one of those 'rolling ID' columns? I.e. the type
which changes back to ID = 1 when some other part of your (new) PK changes... Use a subquery!
TABLE t1...
AUTO_INCR_PK <-> X
1 <-> A
2 <-> A
3 <-> A
4 <-> B
5 <-> B
6 <-> B
7 <-> C
8 <-> C
9 <-> D
TABLE t2 ...
ID <-> X
1 <-> A
2 <-> A
3 <-> A
1 <-> B
2 <-> B
3 <-> B
1 <-> C
2 <-> C
1 <-> D
Cool eh?
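The comment leaves out the actual query; one hedged way to derive t2 from t1 is a correlated COUNT(*) subquery that numbers each row within its X group, sketched here with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (pk INTEGER PRIMARY KEY, x TEXT);
INSERT INTO t1 (x) VALUES ('A'),('A'),('A'),('B'),('B'),('B'),('C'),('C'),('D');
""")

# For each row, count how many rows in the same X group have a key <= its own:
# that count restarts at 1 whenever X changes, giving the rolling ID.
rows = con.execute("""
    SELECT (SELECT COUNT(*) FROM t1 b WHERE b.x = a.x AND b.pk <= a.pk) AS id, a.x
    FROM t1 a ORDER BY a.pk
""").fetchall()
print(rows)
# [(1,'A'), (2,'A'), (3,'A'), (1,'B'), (2,'B'), (3,'B'), (1,'C'), (2,'C'), (1,'D')]
```

Note the correlated subquery runs once per row, so on large tables this is O(n^2) within each group.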
First, you create a selection and then you use it in your real selection. This is a kind of subquery :)
SELECT *,(SELECT COUNT(*) FROM table2 WHERE table2.field1 = table1.id) AS count FROM table1
WHERE table1.field1 = 'value'
This command will enable you to count fields in table2 based on a column value in table1 and label the result
as "count". The value in table1.field1 can be any valid field type.
INSERT INTO table2 (field1, field2, field3, field4) (SELECT 'value1 from user input', field1, field2, field3 from
table1)
The 4 fields in table2 will be populated by the 4 fields (including the string) returned by the SELECT sub-query
respectively.
I know this MIGHT raise issues with speed of queries but it's better than writing long lines of PHP code that
does those 3 things - even with a framework! I can just use the mysql_affected_rows() after that query to see if
everything went fine.
NOTE: Make sure the number of fields in the SELECT query is EXACTLY the same number of fields you are
about to insert.
The page http://dev.mysql.com/doc/refman/5.1/en/subquery-restrictions.html
shows you how to use a temporary table. But you can also create a View based on a table, use that for the
SELECT statement and then use the regular table name for the UPDATE / DELETE statement.
UPDATE people,
(SELECT count(*) as votecount, person_id
FROM votes GROUP BY person_id) as tally
SET people.votecount = tally.votecount
WHERE people.person_id = tally.person_id
Posted by Devang Modi on August 30, 2011
A combined INSERT ... SELECT always obeys InnoDB locking rules
if one of the source tables uses the InnoDB engine.
The INSERT may also target a TEMPORARY table that is not InnoDB,
and the SELECT part of an InnoDB statement may likewise
use other TEMPORARY tables.
Devang Modi
User Comments
Posted by mrfox on February 6, 2007
when the same subquery is used several times, mysql does not use this fact to optimize the query, so be
careful not to run into performance problems.
example:
SELECT
col0,
(SELECT col1 FROM table1 WHERE table1.id = table0.id),
(SELECT col2 FROM table1 WHERE table1.id = table0.id)
FROM
table0
WHERE ...
the join of table0 with table1 is executed once for EACH subquery, leading to very bad performance for this
kind of query.
Posted by Rami Jamleh on June 3, 2012
Oracle says that you can left join via scalar sub-queries
e.g
create table x (id int auto_increment primary key,name varchar(20),yid int);
and they say outer joins may have negative impact on performance
Here is an example of a common-form subquery comparison that you cannot do with a join. It finds
all the rows in table t1 for which the column1 value is equal to a maximum value in table t2:
SELECT * FROM t1
WHERE column1 = (SELECT MAX(column2) FROM t2);
Here is another example, which again is impossible with a join because it involves aggregating for
one of the tables. It finds all rows in table t1 containing a value that occurs twice in a given column:
SELECT * FROM t1 AS t
WHERE 2 = (SELECT COUNT(*) FROM t1 WHERE t1.id = t.id);
For a comparison of the subquery to a scalar, the subquery must return a scalar. For a comparison
of the subquery to a row constructor, the subquery must be a row subquery that returns a row with
the same number of values as the row constructor. See Section 13.2.10.5, Row Subqueries.
Posted by anonymous on September 2, 2004
A NOT IN condition, otoh, would be false. NOT IN is always going to be at least as hard to satisfy (as <>
SOME) because to be true /all/ rows have to be non-equal.
# table score: each record contains the score for each review of each question
CREATE TABLE score
(
reviewingid integer ,
score integer,
FOREIGN KEY (reviewingid) REFERENCES reviewing (reviewingid)
);
Suppose you want to create foreign keys on an existing table with orphans. If you try to ALTER TABLE to
create the foreign keys, you get the dreaded:
error 1452 "cannot add or update a child row: foreign key constraint fails"
This is a good thing, because after implementing foreign keys you want your tables to be consistent, and not
contain orphans, so you must delete them beforehand.
Given tables 'child' and 'parent' where child.parent_id is a foreign key referencing parent.id, use the following to
clean up any orphans.
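The cleanup statement itself is missing from the comment; given the child/parent tables described above, the usual pattern is a multiple-table DELETE with an exclusion join (a sketch, not the commenter's exact query):

```sql
-- delete child rows whose parent_id matches no parent.id (orphans)
DELETE child FROM child
LEFT JOIN parent ON child.parent_id = parent.id
WHERE parent.id IS NULL;
```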
SELECT * FROM t1
WHERE (col1,col2) = (SELECT col3, col4 FROM t2 WHERE id = 10);
SELECT * FROM t1
WHERE ROW(col1,col2) = (SELECT col3, col4 FROM t2 WHERE id = 10);
For both queries, if the table t2 contains a single row with id = 10, the subquery returns a single
row. If this row has col3 and col4 values equal to the col1 and col2 values of any rows in t1,
the WHERE expression is TRUE and each query returns those t1 rows. If
the t2 row col3 and col4 values are not equal to the col1 and col2 values of any t1 row, the
expression is FALSE and the query returns an empty result set. The expression is unknown (that
is, NULL) if the subquery produces no rows. An error occurs if the subquery produces multiple rows
because a row subquery can return at most one row.
For information about how each operator works for row comparisons, see Section 12.3.2,
Comparison Functions and Operators.
The expressions (1,2) and ROW(1,2) are sometimes called row constructors. The two are
equivalent. The row constructor and the row returned by the subquery must contain the same
number of values.
A row constructor is used for comparisons with subqueries that return two or more columns. When a
subquery returns a single column, this is regarded as a scalar value and not as a row, so a row
constructor cannot be used with a subquery that does not return at least two columns. Thus, the
following query fails with a syntax error:
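The failing query is missing here; a minimal example of the error described (a row constructor compared with a single-column subquery) would be:

```sql
-- fails with a syntax error: ROW(1) is a row constructor,
-- but the subquery returns a single (scalar) column
SELECT * FROM t1 WHERE ROW(1) = (SELECT column1 FROM t2);
```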
Posted by ersin yilmaz on February 24, 2004
Store 'si' is in all cities if and only if, for this store, we cannot find a city cj in which 'si' does not exist.
So the second select statement, SELECT * FROM cities WHERE NOT EXISTS, is applied to determine that.
best
The EXISTS predicate provides a simple way to find the intersection between tables (the INTERSECT operator from
the relational model).
If we have table1 and table2, both having id and value columns, the intersection could be calculated like this:
SELECT * FROM table1 WHERE EXISTS(SELECT * FROM table2 WHERE table1.id=table2.id AND
table1.value=table2.value)
I'm coming from an MS SQL background (not my fault, honest) and would like to add that 'exists' does not
operate as I expected it to in interactive mode.
but in interactive mode, this produces a syntax error. Oddly enough, though, this does work in a stored
procedure.
Well...
It's a challenge!
Cheers.
For example:
CREATE TABLE states
(
state_id int auto_increment not null,
state_code char(2) not null,
state_name varchar(100) not null,
UNIQUE(state_code),
PRIMARY KEY(state_id)
);
mysql> select 100 from states where not exists (select 1 from states where state_id=1);
Empty set (0.00 sec)
I also come from an MSSQL background (also not my fault as I like to work), and found that this works in place
of "IF EXISTS() THEN":
DECLARE v_user INTEGER DEFAULT (SELECT `user` FROM user_privacy WHERE `user` = p_user);
I won't speak to the efficiency of that, but there it is. (I wouldn't store VARCHARs or TEXTs, etc. that way, for
no reason).
It can also be written with the SELECT inline with the IF statement:
IF (SELECT `user` FROM user_privacy WHERE `user` = p_user) IS NOT NULL THEN
-- do whatever
END IF;
.. but I'm in the habit of sticking things like that into vars for re-use in longer SPs.
Hi,
I wonder if anyone tested the last example "What kind of store is present in all cities?"
It did not work for me; instead I used the query below:
select count(stype), stype, city from cities_stores S
group by S.stype
having count(stype) = (select count(distinct D.city) from cities_stores D)
any thought?
Thanks,
Do not combine an EXISTS subquery with a loose index scan on a different column.
Because the loose index scan does not read the entire index, problems can occur when the EXISTS subquery is
evaluated on a column other than the GROUP BY column.
Example:
CREATE TABLE `g1` (
`seq` int(11) NOT NULL,
`key1` bigint(20) DEFAULT NULL,
PRIMARY KEY (`seq`),
KEY `idx1` (`key1`)
);
CREATE TABLE `g2` (
`seq` int(11) NOT NULL,
`key2` bigint(20) DEFAULT NULL,
PRIMARY KEY (`seq`),
KEY `idx1` (`key2`)
);
+-----+------+
| seq | key1 |
+-----+------+
| 1 | 10 |
| 2 | 20 |
| 3 | 30 |
| 4 | 30 |
... (There are enough rows to do a "loose index scan")
+-----+
| seq |
+-----+
| 1 |
| 2 |
| 3 |
| 4 |
+-----+
mysql> select key1 from g1 where exists (select 1 from g2 where g1.seq=g2.seq) group by g1.key1;
+------+
| key1 |
+------+
| 10 |
| 20 |
| 30 |
+------+
When the optimizer uses a loose index scan for the same query, it instead returns:
+------+
| key1 |
+------+
| 10 |
| 20 |
+------+
When using a loose index scan, only seq = 3 is examined, because it comes first among the rows with key1 = 30.
Therefore, even though seq = 4 would satisfy the EXISTS subquery, the query reports no row with key1 = 30.
+------+-----+
| key1 | seq |
+------+-----+
| 10 | 1 |
| 20 | 2 |
| 30 | 3 |
| 30 | 4 |
...
Aggregate functions in correlated subqueries may contain outer references, provided the function
contains nothing but outer references, and provided the function is not contained in another function
or expression.
Another example of when a subquery is optimized is when using a subquery based on a multipart primary key
to 'join' the two tables.
For example
DELETE FROM t1 WHERE ROW(c1,c2) IN (
SELECT c1, c2 FROM t2
);
You could easily select the above 'join' using a 'where t1.c1=t2.c1 and t1.c2=t2.c2', but a delete statement
won't know what to do (and fails) over the joined tables.
Note that according to bug #9090 ( http://bugs.mysql.com/bug.php?id=9090 ) it appears that ALL subqueries
will be executed as correlated subqueries until version 5.2 or 6.0, when the optimizer may gain the ability to
correctly execute uncorrelated subqueries.
Ryan Findley is good to point out the bug report, but as far as I can tell it is not true that MySQL rewrites ALL
subqueries as correlated subqueries. Rather, the problem that the bug report identifies is that the MySQL
optimizer does that to subqueries that test an IN statement. This is listed as one of the current limitations of
MySQL's subquery support here (third item down, "Subquery optimization for IN..."):
http://dev.mysql.com/doc/refman/5.0/en/subquery-restrictions.html
So if you are writing a subquery that does not use IN, I think that you will be able to keep it uncorrelated.
However, I have not tested this.
If you have a slow 'correlated' subquery with IN, you can optimize it with a join to get around the bug described
by Ryan and Stephen. After the optimization the execution time is no longer O(MN).
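The join rewrite is not shown in the comment; for an IN subquery over tables t1 and t2 (names and the inner condition assumed), the transformation looks like this:

```sql
-- before: may be re-executed once per outer row by the old optimizer
-- SELECT * FROM t1 WHERE id IN (SELECT id FROM t2 WHERE flag = 1);

-- after: the derived table is materialized once, then joined
SELECT t1.*
FROM t1
JOIN (SELECT DISTINCT id FROM t2 WHERE flag = 1) AS d ON t1.id = d.id;
```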
SELECT AVG(sum_column1)
FROM (SELECT SUM(column1) AS sum_column1
FROM t1 GROUP BY column1) AS t1;
Notice that the column name used within the subquery (sum_column1) is recognized in the outer
query.
Derived tables can return a scalar, column, row, or table.
Derived tables cannot be correlated subqueries, or contain outer references or references to other
tables of the same SELECT.
The optimizer determines information about derived tables in such a way that materialization of them
does not occur for EXPLAIN. See Section 8.2.2.3, Optimizing Derived Tables and View
References.
It is possible under certain circumstances that using EXPLAIN SELECT will modify table data. This
can occur if the outer query accesses any tables and an inner query invokes a stored function that
changes one or more rows of a table. Suppose that there are two tables t1 and t2 in database d1,
and a stored function f1 that modifies t2, created as shown here:
CREATE DATABASE d1;
USE d1;
CREATE TABLE t1 (c1 INT);
CREATE TABLE t2 (c1 INT);
CREATE FUNCTION f1(p1 INT) RETURNS INT
BEGIN
INSERT INTO t2 VALUES (p1);
RETURN p1;
END;
Referencing the function directly in an EXPLAIN SELECT has no effect on t2, as shown here:
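The EXPLAIN statement itself is missing between these two lines; it would be something like:

```sql
-- EXPLAIN does not actually execute this query,
-- so f1() never runs and t2 stays empty
EXPLAIN SELECT f1(5);
```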
mysql> SELECT * FROM t2;
Empty set (0.02 sec)
2017, Oracle Corporation and/or its affiliates
Posted by Guy Gordon on November 25, 2006
Wanting to copy a longtext field from one record to another, I first tried:
Update t1 set list=(select list from t1 where recno=230) where recno=169
I expected this to select one value from record 230 and copy it into record 169. Instead it fails with Error 1093.
Even though this is a single scalar value, MySQL will not let you use the same table in both the update and
from parts.
I got the desired result using a temp variable and two queries:
Set @Guy = (select list from t1 where recno=230);
Update t1 set list=@Guy where recno=169
Note that the semicolon separates the two statements (in phpMyAdmin). Since the temp variable is connection
specific, the two queries must be run together.
There is a workaround for the use of LIMIT in a subquery: just use a variable (a separate query; execute this one
first):
SET @i = 0;
And then the select-query with the subquery including the LIMIT:
SELECT
*
FROM
my_table
WHERE
id_my_other_table IN(
SELECT id FROM my_other_table
WHERE
( @i := ( @i +1 ) ) <= 10
);
When I try to run update query for my table "comments", MySQL returns the #1093 - You can't specify target
table 'comments' for update in FROM clause message. My contrived table structure and update query are as
follow:
CREATE TABLE comments(id int primary key, phrase text, uid int);
INSERT INTO comments VALUES(1, 'admin user comments',1), (2, 'HR User Comments',2), (3, 'RH User
Comments',2);
Is there any easy way to work around the #1093 - You can't specify target table 'comments' for update in
FROM clause error?
Since MySQL materializes sub queries in the FROM Clause as temporary tables, wrapping the subquery into
another inner subquery in the FROM Clause causes it to be executed and stored into a temporary table, then
referenced implicitly in the outer subquery. So, the update query will succeed by rewriting it like below:
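The rewritten query did not survive extraction; using the comments table above, the workaround has this shape (the SET clause and the inner condition are illustrative, not the original):

```sql
-- wrapping the subquery one level deeper forces it into a temporary
-- table, avoiding error 1093
UPDATE comments
SET phrase = CONCAT(phrase, ' [dup]')
WHERE uid IN (SELECT uid FROM
              (SELECT uid FROM comments GROUP BY uid HAVING COUNT(*) > 1) AS t);
```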
FROM:
http://www.mysqlfaqs.net/mysql-faqs/Errors/1093-You-can-not-specify-target-table-comments-for-update-in-
FROM-clause
Well, I had a scenario where I had a table with one BLOB field to store images. I had about 3000 records in the
database. First, we needed a default image for all these 3000 records. So, I inserted an image into the table
using phpmyadmin for the first record. Now, what I wanted to do was to copy this image from first record and
update (paste) all the 2999 records instead of uploading the image for each record. I tried hard to find a
solution. Asked many friends, but none of them had the answer. Eventually, I'd to create a copy of that table
and then run the following query :
update table1 set image = (select image from table2 where id=1)
This worked for me ! This is the fastest and easiest way to do it. If anyone else has a better way to do it, do let
me know.
SELECT table1.*
FROM table1 LEFT JOIN table2 ON table1.id=table2.id
WHERE table2.id IS NULL;
A LEFT [OUTER] JOIN can be faster than an equivalent subquery because the server might be
able to optimize it better, a fact that is not specific to MySQL Server alone. Prior to SQL-92, outer
joins did not exist, so subqueries were the only way to do certain things. Today, MySQL Server and
many other modern database systems offer a wide range of outer join types.
MySQL Server supports multiple-table DELETE statements that can be used to efficiently delete rows
based on information from one table or even from many tables at the same time. Multiple-
table UPDATE statements are also supported. See Section 13.2.2, DELETE Syntax,
and Section 13.2.11, UPDATE Syntax.
Posted by on January 17, 2004
Here is a rough description of one more method to do something equivalent to a subquery in a DELETE for
older versions of MySQL that do not support it. It only works for the case where there is a unique key on the
table. It requires making a temporary table and a temporary column on the table from which you wish to delete.
However it's all done in a sequence of SQL commands which is a little cleaner than the other methods.
Let's say the table from which you want to delete is "origtable". First create a temporary table with the same
structure as origtable (though without autoincrement columns), let's call it "temptable". Then fill it with the rows
of origtable that you wish to delete, for example:
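The INSERT that fills temptable is missing from the comment; it would look something like this (the selection condition is hypothetical):

```sql
-- copy the rows in question into the temporary table
INSERT INTO temptable
SELECT * FROM origtable
WHERE status = 'obsolete';  -- hypothetical condition
```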
It's important that all results from the SELECT statement should have unique keys that are present in origtable,
in other words don't do something weird here like taking the cosine of the primary key. Next create a temporary
new column on origtable to designate which rows you intend to delete, and default this to 1:
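The ALTER TABLE statement is likewise missing; presumably:

```sql
-- flag defaults to 1; rows matched in temptable are reset to 0
-- by the REPLACE below
ALTER TABLE origtable
ADD COLUMN tempdeleteflag TINYINT NOT NULL DEFAULT 1;
```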
Next set tempdeleteflag to 0 for the rows that are present in temptable using REPLACE:
REPLACE origtable SELECT id, col1, col2, ..., colN, 0 FROM temptable;
Now tempdeleteflag should be set to 0 for rows you intended to keep or 1 for rows you intended to delete, so:
DELETE FROM origtable WHERE tempdeleteflag = 1;
If you created temptable using TEMPORARY it will go away when your session ends, otherwise drop it now.
You say to create a "temptable" and fill it with the rows you wish to delete... this should say "fill it with the rows
you wish to KEEP", alternatively the following SQL DELETE statement should be inverted so that you delete
those "where tempdeleteflag = 0";
Why? Well, you are defaulting the origtable to have a value of 1 in the tempdeleteflag column, and then setting
it to 0 for those that you want to DELETE. (0 means delete) obviously then the DELETE statement should
delete those that are 0. Alternatively, if you set it to 0 for those that you want to KEEP (0 means keep, then 1
means DELETE) then you DELETE those that are set to 1. Be very careful to get it right.
Posted by Are you mortal Then prepare to die. on November 24, 2004
About the last comment, presumably the select involves a join which a straight delete could not.
A final method is to use a left join to remove those rows you don't want from the results set, insert that
results set into a new table, and swap that table with the table you wanted to delete from...
DELETE FROM
WHERE
ROW(SUNID1,SUNID2) IN (
SELECT
SUNID1,SUNID2
FROM
toExcludeFromInterfaceBreakdown
);
value:
{expr | DEFAULT}
assignment:
col_name = value
assignment_list:
assignment [, assignment] ...
Multiple-table syntax:
If you update a column that has been declared NOT NULL by setting to NULL, an error occurs if strict
SQL mode is enabled; otherwise, the column is set to the implicit default value for the column data
type and the warning count is incremented. The implicit default value is 0 for numeric types, the
empty string ('') for string types, and the zero value for date and time types. See Section 11.7,
Data Type Default Values.
If a generated column is updated explicitly, the only permitted value is DEFAULT. For information
about generated columns, see Section 13.1.18.8, CREATE TABLE and Generated Columns.
UPDATE returns the number of rows that were actually changed. The mysql_info() C API function
returns the number of rows that were matched and updated and the number of warnings that
occurred during the UPDATE.
You can use LIMIT row_count to restrict the scope of the UPDATE. A LIMIT clause is a rows-
matched restriction. The statement stops as soon as it has found row_count rows that satisfy
the WHERE clause, whether or not they actually were changed.
If an UPDATE statement includes an ORDER BY clause, the rows are updated in the order specified by
the clause. This can be useful in certain situations that might otherwise result in an error. Suppose
that a table t contains a column id that has a unique index. The following statement could fail with a
duplicate-key error, depending on the order in which rows are updated:
UPDATE t SET id = id + 1;
For example, if the table contains 1 and 2 in the id column and 1 is updated to 2 before 2 is updated
to 3, an error occurs. To avoid this problem, add an ORDER BY clause to cause the rows with
larger id values to be updated before those with smaller values:
UPDATE t SET id = id + 1 ORDER BY
id DESC;
You can also perform UPDATE operations covering multiple tables. However, you cannot use ORDER
BY or LIMIT with a multiple-table UPDATE. The table_references clause lists the tables involved in
the join. Its syntax is described in Section 13.2.9.2, JOIN Syntax. Here is an example:
UPDATE items,month SET
items.price=month.price
WHERE items.id=month.id;
The preceding example shows an inner join that uses the comma operator, but multiple-
table UPDATE statements can use any type of join permitted in SELECT statements, such as LEFT
JOIN.
If you use a multiple-table UPDATE statement involving InnoDB tables for which there are foreign key
constraints, the MySQL optimizer might process tables in an order that differs from that of their
parent/child relationship. In this case, the statement fails and rolls back. Instead, update a single
table and rely on the ON UPDATE capabilities that InnoDB provides to cause the other tables to be
modified accordingly. See Section 14.8.1.6, InnoDB and FOREIGN KEY Constraints.
You cannot update a table and select from the same table in a subquery.
An UPDATE on a partitioned table using a storage engine such as MyISAM that employs table-level
locks locks only those partitions containing rows that match the UPDATE statement WHERE clause, as
long as none of the table partitioning columns are updated. (For storage engines such
as InnoDB that employ row-level locking, no locking of partitions takes place.) For more information,
see Section 22.6.4, Partitioning and Locking.
Table A
+--------+-----------+
| A-num | text |
| 1 | |
| 2 | |
| 3 | |
| 4 | |
| 5 | |
+--------+-----------+
Table B:
+------+------+--------------+
| B-num| date | A-num |
| 22 | 01.08.2003 | 2 |
| 23 | 02.08.2003 | 2 |
| 24 | 03.08.2003 | 1 |
| 25 | 04.08.2003 | 4 |
| 26 | 05.03.2003 | 4 |
+------+------------+-------+
+--------+----------------------+
| A-num  | text                 |
| 1      | 24 from 03 08 2003 / |
| 2      | 22 from 01 08 2003 / |
| 3      |                      |
| 4      | 25 from 04 08 2003 / |
| 5      |                      |
+--------+----------------------+
(only one field from Table B is accepted)
+--------+--------------------------------------------+
| A-num | text |
| 1 | 24 from 03 08 2003 |
| 2 | 22 from 01 08 2003 / 23 from 02 08 2003 / |
| 3 | |
| 4 | 25 from 04 08 2003 / 26 from 05 03 2003 / |
| 5 | |
+--------+--------------------------------------------+
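The UPDATE statements that produced these two results are missing from the comment. Assuming tables named A and B with the columns shown (and glossing over the exact date formatting), the first result comes from a scalar subquery that accepts a single B row, and the second from GROUP_CONCAT:

```sql
-- first result: only one matching row from Table B is accepted
UPDATE A SET text =
  (SELECT CONCAT(`B-num`, ' from ', `date`, ' / ')
   FROM B WHERE B.`A-num` = A.`A-num`
   ORDER BY `date` LIMIT 1);

-- second result: all matching rows from Table B are concatenated
UPDATE A SET text =
  (SELECT GROUP_CONCAT(CONCAT(`B-num`, ' from ', `date`, ' / ') SEPARATOR '')
   FROM B WHERE B.`A-num` = A.`A-num`);
```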
This makes it possible to update a table1 column with an expression even when the corresponding value from table2
is returned as NULL.
Posted by Adam Boyle on March 2, 2004
It took me a few minutes to figure this out, but the syntax for UPDATING ONE TABLE ONLY using a
relationship between two tables in MySQL 4.0 is actually quite simple:
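The statement itself did not survive; the MySQL 4.0 multiple-table UPDATE pattern the comment describes is (table and column names assumed):

```sql
-- update only t1, using t2 merely to locate the matching rows
UPDATE t1, t2
SET t1.some_col = t2.other_col
WHERE t1.id = t2.t1_id;
```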
"You are using safe update mode and you tried to update a table without...etc."
...then it may be that your .cnf file must be edited to disable safemode. This worked for me. In order for the
change in the .cnf file to take effect, you must have permission to restart mysqld in the server OS environment.
There is a page in the online documentation that explains safe mode entitled 'safe Server Startup Script'.
Notes: That index addition is necessary because on larger tables MySQL would rather die than
(internally) index a single-column join.
I had a problem: I had to update a column "rate", but if the existing or new value was greater than 5, then "5" had to
be the final value in the field.
So, I did it in one "magic" query ;)
Here an example:
update item
set rate = case when round((rate+3)/2) < 6 then round((rate+3)/2) else 5 end
where id = 1 and rate <= 6;
greetings
pecado
UPDATE xoops_bb_posts_text
SET post_text=(
REPLACE (post_text,
'morphix.sourceforge.net',
'www.morphix.org'));
using the string function REPLACE, all items in the post_text column with 'morphix.sourceforge.net' get this
substring replaced by 'www.morphix.org'. Ideal when writing a script is just too much effort.
update some_table
set col = col + 1
where key = 'some_key_value'
and @value := col
The @value := col will always evaluate to true and will store the col value before the update in the @value
variable.
select @value;
update Table1 t1
join Table2 t2 on t1.ID=t2.t1ID
join Table3 t3 on t2.ID=t3.t2ID
set t1.Value=12345
where t3.ID=54321
Scenario: ID 8 has multiple records; only the last (highest) record needs to be changed
I would prefer update t1 set c1='NO' WHERE ID=8 AND RECNO = (SELECT MAX(RECNO) FROM T1
WHERE ID=8)
UPDATE table1 SET table1field = (SELECT MAX(table2.table2field) FROM table2 WHERE table1.table1field =
table2.table2field)
This can be helpful if you need to create a temporary table storing an ID (for, say, a person) and a "last date"
and already have another table storing all dates (for example, all dates of that person's orders).
update item
set rate = case when round((rate+3)/2) < 6 then round((rate+3)/2) else 5 end
where id = 1 and rate <= 6;
update item
set rate = least(round((rate+3)/2), 5)
where id = 1 and rate <= 6;
while TRUE {
..UPDATE table SET value = 1
....WHERE value = 0 and name = 'name'
..if no. of rows affected > 0, break
..else wait and try again
}
The code above waits until the semaphore is "cleared" (value = 0) and then "sets" it (value = 1). When done,
you "clear" the semaphore by
UPDATE table SET value = 0 WHERE name = 'name'
The assumption is that the UPDATE is "atomic" in that no concurrent access by another process can occur
between testing and setting the value field.
A very server resources friendly method to update multiple rows in the same table is by using WHEN THEN
(with a very important note).
The note is: do not forget ELSE. If you do not use it, all rows that are outside the range of your updated values
will be set to blank!
e.g. A table that contains entries of different categories, in which an internal order needs to be represented (let's
say a table with busstops on different routes). If you add new entries or move stops from one route to another,
you will most likely want to increment the position of the busstop within its route. That's how you can do it for
table busstops.
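The WHEN THEN statement itself is missing from the comment; a sketch for the busstops table (ids and positions are made up):

```sql
-- without the ELSE branch, every non-matching row's pos would be
-- set to NULL (or 0 in non-strict mode), i.e. "blank"
UPDATE busstops
SET pos = CASE id
    WHEN 101 THEN 1
    WHEN 102 THEN 2
    WHEN 103 THEN 3
    ELSE pos
END
WHERE route = 1;
```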
I doubt this could be done otherwise since referencing the table you wish to update within the subquery creates
circular references
After DELETE or UPDATE, i.e. when a row of a subset is lost/deleted/moved away from it, the whole subset will
need to be reordered. This can be done similarly:
SET @pos=0;
UPDATE busstops SET pos = ( SELECT @pos := @pos +1 ) WHERE route = 1 ORDER BY pos ASC
Chris H (chansel0049)
In version 5, however, the above query only updated one element while still matching "all"
This sets the values of field 'f2' according to the values of field 'f3' in the 'affected' rows (field f5).
--------------
Pretty cool. What I'm doing here is copying the information I need from the row where job_id=1 to the row
where job_id=6, on the same table.
That strikes me as an elegant syntax. Here is the closest I could come up with for doing that on Oracle:
update t1 set t1.field=(select value from t2 where t1.this=t2.that) where t1.this in (select that from t2);
> The @value := col will always evaluate to true and will store the col value before the update in the @value
variable.
In fact, it won't if `col` is NULL (or 0, the empty string, etc.): then the condition is not met and the update query won't
be processed. The correct condition would be:
A summary table (in this case created to hold summary counts of other genealogy data, based on the two fields
that make up the PRIMARY key) often contains unique key fields and one or more summary totals (Cnt in this
case). Additional ranking fields in the summary table can be easily updated to contain rankings of the Cnt field
using the IF function and
local variables.
Table DDL:
After populating the table with rows containing key and summary data (and leaving the rank field(s) to be
updated in
a subsequent step), the rank fields can be updated using syntax similar to the following:
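The DDL and the update statement did not survive extraction; a sketch of the pattern described, with assumed table and column names (summary, gedid, Cnt, Rnk):

```sql
-- initialize; @gedid must start at a value no real key will match
SET @rnk := 0, @gedid := -1;

-- rank rows within each gedid group, ordered by descending Cnt;
-- assigning gedid to itself lets us capture the previous key value,
-- since SET clauses are evaluated left to right
UPDATE summary
SET Rnk   = (@rnk := IF(@gedid = gedid, @rnk + 1, 1)),
    gedid = (@gedid := gedid)
ORDER BY gedid, Cnt DESC;
```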
It looks convoluted, but is really quite simple. The @rnk variable needs to be initialized, and the keyval variable
(in this case @gedid or @snmid) needs to be set to a value that will not be matched by the first record. The IF()
function checks the previous key value (left side) against the current key value (right side), and either
increments the @rnk variable when the current key value is the same as the previous record's, or resets the
@rnk variable to 1 when the key value changes.
This can be easily extended to accommodate ranking on more than one key value, and does not require sub-
selects that take considerable resources for a large table.
This example intentionally assigns different ranks to equal values of Cnt for a given key, to facilitate reporting
where column headings contain the rank value.
the condition `@value := col` gets optimised out as 'true' and @value is left as NULL after the update.
update some_table
set col = col + 1
where key = 'some_key_value'
and ((@value := col) IS NULL OR (@value := col) IS NOT NULL)
So you get a true value either way and value will get set. Be careful what you put on the right-hand-side as it
could get evaluated twice.
This is true but there are two simple ways around this limit.
1) nest the subquery 2 deep so it is fully materialized before the update runs. For example:
Update t1 set v1 = v1 + 1 where id in
(select t2.id from (select id from t1) as t2)
When updating one table using values obtained from another table, the manual describes the "update table1,
table2" syntax, but does not delve into the correlated subquery approach very much. It also does not point out
a VERY important execution difference.
+--------+----------+
| col_pk | col_test |
+--------+----------+
| 1 | NULL |
| 2 | NULL |
+--------+----------+
2 rows in set
+-------------+--------------+
| col_pk_join | col_test_new |
+-------------+--------------+
| 1 | 23 |
| 1 | 34 |
| 2 | 45 |
+-------------+--------------+
3 rows in set
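The multi-table UPDATE that was run between these two listings is missing; from the surrounding description it was presumably:

```sql
-- silently uses the first matching test_2 value when several rows join
UPDATE test_1, test_2
SET test_1.col_test = test_2.col_test_new
WHERE test_1.col_pk = test_2.col_pk_join;
```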
+--------+----------+
| col_pk | col_test |
+--------+----------+
| 1 | 23 |
| 2 | 45 |
+--------+----------+
2 rows in set
Note that the update did NOT produce any errors or warnings. It should have. Why? Because a join on value 1
produces two values from table test_2. Two values cannot fit into a space for one. What MySQL does in this
case is use the first value and ignore the second value. This is really bad in my opinion because it is, in
essence, putting incorrect data into table test_1.
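The correlated-subquery form the comment contrasts this with is likewise missing; presumably:

```sql
-- errors out ("Subquery returns more than 1 row") instead of
-- silently picking one value
UPDATE test_1
SET col_test = (SELECT col_test_new
                FROM test_2
                WHERE test_2.col_pk_join = test_1.col_pk);
```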
This will produce the appropriate error for the given data:
"ERROR 1242 : Subquery returns more than 1 row"
and will not perform any update at all, which is good (it protects table test_1 from getting bad data).
Now if you have different data........if you comment out one of the "1" values inserted into table test_2 and use
the correlated subquery update instead of the multi-table update, table test_1 will get updated with exactly what
you expect.
The moral of this example/tip/bug-report: do not use the multi-table update. Use the correlated subquery
update instead. It's safe. If you keep getting an error when you think you shouldn't, you either have bad data in
your source table or you need to rework your subquery such that it produces a guaranteed one-row result for
each destination row being updated.
The reason I call the multi-table update a bug is simply because I feel it should produce the same or similar
error as the correlated subquery update. My hope is that MySQL AB will agree with me.
# Set all the values to NULL (so there is no risk of duplicate values under UNIQUE constraints)
UPDATE MiTabla SET columna=NULL;
# Declare a variable as a counter (it can be 1,2,3... or the number we want to start from)
SET @c:=1;
# The query
UPDATE MiTabla SET columna=(SELECT @c:=@c+1);
# Now we can use ALTER TABLE again if we want to change the column back to NOT NULL (in case
we had changed it)
Keep in mind that the main indexes (those declared as PRIMARY KEY, for example, or those used to
link tables) SHOULD NOT BE CHANGED, since that would break the links between the
tables! This can be avoided by declaring the foreign keys (FOREIGN KEY) of the linked tables with
ON UPDATE CASCADE (so that updating the indexes refreshes the links between the tables).
update db1.a, (
select distinct b.col1, b.col2
from db2.b, db2.c, db2.d
where b.col1<>'' and d.idnr=b.idnr and c.user=d.user and c.role='S'
order by b.col1) as e
set a.col1 = e.col1
where a.idnr = e.col1
The point is that every select statement returns a table. Name the result and you can access its columns. In
this example I called the result 'e'.
I never could figure out how to set the value of multiple columns without nesting a select statement dedicated to
each column. Now I've got it. I'm attaching a transcript of doing it both ways. The statements use the tables that
already exist in the mysql schema (at least in 5.0), so you can easily recreate this on your box in a test schema.
--------------
DROP TABLE IF EXISTS test
--------------
--------------
CREATE TABLE test (t_id INT,k_id INT, t_name CHAR(64), t_desc TEXT) AS
SELECT help_topic_id AS t_id, help_keyword_id AS k_id, NULL AS t_name, NULL AS t_desc FROM
mysql.help_relation LIMIT 10
--------------
--------------
SELECT * FROM test
--------------
+------+------+--------+--------+
| t_id | k_id | t_name | t_desc |
+------+------+--------+--------+
|    0 |    0 | NULL   | NULL   |
|  327 |    0 | NULL   | NULL   |
|  208 |    1 | NULL   | NULL   |
|  409 |    2 | NULL   | NULL   |
|   36 |    3 | NULL   | NULL   |
|  388 |    3 | NULL   | NULL   |
|  189 |    4 | NULL   | NULL   |
|  169 |    5 | NULL   | NULL   |
|  393 |    6 | NULL   | NULL   |
|   17 |    7 | NULL   | NULL   |
+------+------+--------+--------+
10 rows in set (0.00 sec)
--------------
######
## This is the elegant single select solution! ##
######
UPDATE test AS t, (SELECT * FROM mysql.help_topic) AS h SET
t.t_name=h.name,
t.t_desc=substr(h.url,1-locate('/',reverse(h.url)))
WHERE t.t_id=h.help_topic_id
--------------
--------------
SELECT * FROM test
--------------
+------+------+------------------+---------------------------+
| t_id | k_id | t_name           | t_desc                    |
+------+------+------------------+---------------------------+
|    0 |    0 | JOIN             | join.html                 |
|  327 |    0 | SELECT           | select.html               |
|  208 |    1 | REPEAT LOOP      | repeat-statement.html     |
|  409 |    2 | ISOLATION        | set-transaction.html      |
|   36 |    3 | REPLACE INTO     | replace.html              |
|  388 |    3 | LOAD DATA        | load-data.html            |
|  189 |    4 | CREATE FUNCTION  | create-function.html      |
|  169 |    5 | CHANGE MASTER TO | change-master-to.html     |
|  393 |    6 | CHAR             | string-type-overview.html |
|   17 |    7 | SHOW COLUMNS     | show-columns.html         |
+------+------+------------------+---------------------------+
10 rows in set (0.03 sec)
--------------
DROP TABLE IF EXISTS test
--------------
--------------
CREATE TABLE test (t_id INT,k_id INT, t_name CHAR(64), t_desc TEXT) AS
SELECT help_topic_id AS t_id, help_keyword_id AS k_id, NULL AS t_name, NULL AS t_desc FROM
mysql.help_relation LIMIT 10
--------------
--------------
SELECT * FROM test
--------------
+------+------+--------+--------+
| t_id | k_id | t_name | t_desc |
+------+------+--------+--------+
|    0 |    0 | NULL   | NULL   |
|  327 |    0 | NULL   | NULL   |
|  208 |    1 | NULL   | NULL   |
|  409 |    2 | NULL   | NULL   |
|   36 |    3 | NULL   | NULL   |
|  388 |    3 | NULL   | NULL   |
|  189 |    4 | NULL   | NULL   |
|  169 |    5 | NULL   | NULL   |
|  393 |    6 | NULL   | NULL   |
|   17 |    7 | NULL   | NULL   |
+------+------+--------+--------+
10 rows in set (0.00 sec)
--------------
######
## This is the nasty method: one select for each column that needs to be updated! ##
######
UPDATE test AS t SET
t.t_name=(SELECT name FROM mysql.help_topic WHERE t.t_id=help_topic_id),
t.t_desc=(SELECT substr(url,1-locate('/',reverse(url))) FROM mysql.help_topic WHERE t.t_id=help_topic_id)
--------------
--------------
SELECT * FROM test
--------------
+------+------+------------------+---------------------------+
| t_id | k_id | t_name           | t_desc                    |
+------+------+------------------+---------------------------+
|    0 |    0 | JOIN             | join.html                 |
|  327 |    0 | SELECT           | select.html               |
|  208 |    1 | REPEAT LOOP      | repeat-statement.html     |
|  409 |    2 | ISOLATION        | set-transaction.html      |
|   36 |    3 | REPLACE INTO     | replace.html              |
|  388 |    3 | LOAD DATA        | load-data.html            |
|  189 |    4 | CREATE FUNCTION  | create-function.html      |
|  169 |    5 | CHANGE MASTER TO | change-master-to.html     |
|  393 |    6 | CHAR             | string-type-overview.html |
|   17 |    7 | SHOW COLUMNS     | show-columns.html         |
+------+------+------------------+---------------------------+
10 rows in set (0.00 sec)
Bye
Suppose you have a table summary(X,A,B,C,D) and a query which returns (X,E,F), and you want to update
the summary table fields C and D with the values of E and F:
Summary rows (1,2,3,0,0),(10,12,13,0,0) and query result (1,4,5),(10,14,15) should produce the updated
summary table (1,2,3,4,5),(10,12,13,14,15):
UPDATE summary SET C=(SELECT E FROM (query) q WHERE summary.X=q.X), D=(SELECT F FROM
(query) q WHERE summary.X=q.X)
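A runnable sketch of this tip (Python with SQLite standing in for MySQL; here a real table q plays the role of the query result):

```python
import sqlite3

# 'summary' holds (X,A,B,C,D); 'q' stands in for the (X,E,F) query result.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE summary (X INTEGER PRIMARY KEY, A INT, B INT, C INT, D INT);
    CREATE TABLE q (X INTEGER PRIMARY KEY, E INT, F INT);
    INSERT INTO summary VALUES (1, 2, 3, 0, 0), (10, 12, 13, 0, 0);
    INSERT INTO q VALUES (1, 4, 5), (10, 14, 15);
""")
# One scalar subquery per target column, each correlated on X.
con.execute("""
    UPDATE summary
       SET C = (SELECT E FROM q WHERE summary.X = q.X),
           D = (SELECT F FROM q WHERE summary.X = q.X)
""")
rows = con.execute("SELECT * FROM summary ORDER BY X").fetchall()
print(rows)  # [(1, 2, 3, 4, 5), (10, 12, 13, 14, 15)]
```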
### (@olditem1:=item1) will assign the value of item1 *before* the update.
2. A multi-table UPDATE that retrieves more than one row for every row to be updated will perform only one
update, using the first value found, and won't send any message about the skipped values that follow (I don't
know if it should be called an error).
3. First work-around (+quick -secure): make sure the joined tables are ordered so that the correct value comes first.
4. Second work-around (-quick +secure): use a subselect for the value to be set [ x=(SELECT yy FROM ...
ORDER BY ... LIMIT 1) ] as shown in the preceding example by James Goatcher (please note the use of LIMIT).
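Work-around 4 can be sketched like this (Python with SQLite as a stand-in; tables t and s are made up): with three candidate source rows, ORDER BY ... LIMIT 1 makes the chosen value deterministic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (k INTEGER PRIMARY KEY, x INT);
    CREATE TABLE s (k INT, y INT);
    INSERT INTO t VALUES (1, NULL);
    INSERT INTO s VALUES (1, 30), (1, 10), (1, 20);  -- three candidate values
""")
# ORDER BY ... LIMIT 1 guarantees a one-row result for the scalar subquery,
# instead of failing (or silently taking an arbitrary matching row).
con.execute("""
    UPDATE t
       SET x = (SELECT y FROM s WHERE s.k = t.k ORDER BY y LIMIT 1)
""")
val = con.execute("SELECT x FROM t WHERE k = 1").fetchone()[0]
print(val)  # 10 -- the smallest candidate, chosen deterministically
```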
UPDATE dt_log AS t,
(
SELECT max(el_count)+1 as maxcount
FROM dt_log where dt_nameid IN ('1','2','3','4')
) AS h
SET t.dt_rej = h.maxcount
WHERE t.dt_edate = '0000-00-00 00:00:00'
AND t.dt_nameid IN ('1','2','3','4')
Using a correlated subquery to update one table with aggregates from another, it took 3 seconds to do 50
records and about 57 seconds to do 800 records.
UPDATE T1 INNER JOIN (SELECT f2, COUNT(*) AS c FROM T2 GROUP BY f2) t USING(f2) SET f1=t.c
did 76 records in 177 ms, 350 records in 177 ms, and 2800 records in 250 ms.
I believe that a correlated subquery is executed once for each row of the outer query, whereas a JOIN to a
non-correlated subquery executes the inner query only once.
The following swaps the values of column1 and column2 in a single statement; MySQL applies SET
assignments left to right, so the user variable captures column1's old value before it is overwritten:
UPDATE table1
SET
column1 = (@v := column1), column1 = column2, column2 = @v;
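For contrast, in standard SQL (SQLite here, driven from Python) every right-hand side of SET sees the row's old values, so a plain two-assignment UPDATE already swaps the columns; MySQL's left-to-right evaluation of assignments is why the @v trick is needed there:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (column1 INT, column2 INT);
    INSERT INTO table1 VALUES (1, 2);
""")
# Both right-hand sides read the pre-update row, so this is a clean swap.
con.execute("UPDATE table1 SET column1 = column2, column2 = column1")
row = con.execute("SELECT column1, column2 FROM table1").fetchone()
print(row)  # (2, 1)
```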
The following updates a field (field9, which is empty) in TABLE1 with data from field9 in TABLE3,
using joins with TABLE2 and TABLE3. I have made up the WHERE and AND conditions for this example.
UPDATE table1 t1
JOIN table2 t2 ON t1.field1 = t2.field1
JOIN table3 t3 ON (t3.field1=t2.field2 AND t3.field3 IS NOT NULL)
SET t1.field9=t3.field9
WHERE t1.field5=1
AND t1.field9 IS NULL