How can I import a large (14GB) MySQL dump file into a new MySQL database?
I searched around and only this solution helped me:
mysql -u root -p
set global net_buffer_length=1000000; --Set network buffer length to a large byte number
set global max_allowed_packet=1000000000; --Set maximum allowed packet size to a large byte number
SET foreign_key_checks = 0; --Disable foreign key checking to avoid delays, errors and unwanted behaviour
source file.sql --Import your sql dump file
SET foreign_key_checks = 1; --Remember to enable foreign key checks when procedure is complete!
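If you want these limits to persist across server restarts instead of being set at runtime, the same variables can also go in the MySQL server configuration file (a sketch; the file location varies by platform, commonly /etc/mysql/my.cnf on Linux or my.ini on Windows):
[mysqld]
max_allowed_packet=1000000000
net_buffer_length=1000000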
Have you tried using the mysql command-line client directly?
mysql -u username -p -h hostname databasename < dump.sql
If you can't do that, there are plenty of utilities you can find via a web search that help you import a large dump into MySQL, such as BigDump.
First, open a command line:
cd..
cd..
f: -- xampp installed drive
cd xampp/mysql/bin
mysql -u root -p
set global net_buffer_length=1000000; --Set network buffer length to a large byte number
set global max_allowed_packet=1000000000; --Set maximum allowed packet size to a large byte number
SET foreign_key_checks = 0; --Disable foreign key checking to avoid delays, errors and unwanted behaviour
use DATABASE_NAME;
source G:\file.sql; --Import your sql dump file
SET foreign_key_checks = 1; --Remember to enable foreign key checks when procedure is complete!
I'm posting my findings because the answers I've seen didn't mention what I ran into, and apparently this approach will even beat BigDump, so check it out.
I was trying to load a 500 MB dump via the Linux command line and kept getting the "MySQL server has gone away" errors. Settings in my.cnf didn't help. What turned out to fix it was this: I was doing one big extended insert like:
insert into table (fields) values (a record, a record, a record, 500 meg of data);
I needed to format the file as separate inserts like this:
insert into table (fields) values (a record);
insert into table (fields) values (a record);
insert into table (fields) values (a record);
Etc.
And to generate the dump, I used something like the following, and it worked like a charm:
SELECT
id,
status,
email
FROM contacts
INTO OUTFILE '/tmp/contacts.sql'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES STARTING BY "INSERT INTO contacts (id,status,email) values ("
TERMINATED BY ');\n'
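If the dump is produced with mysqldump rather than SELECT ... INTO OUTFILE, the same one-row-per-INSERT layout can be requested with its --skip-extended-insert option (a sketch; the credentials and names are placeholders):
mysqldump --skip-extended-insert -u username -p databasename > separate_inserts.sql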
A simple solution is to run the following command:
mysql -h yourhostname -u username -p databasename < yoursqlfile.sql
If you want to import it with a progress bar, try this:
pv yoursqlfile.sql | mysql -uxxx -pxxxx databasename
In a recent project we had to work with and manipulate a huge amount of data. Our client provided us with 50 CSV files, ranging from 30 MB to 350 MB in size, which together contained roughly 20 million rows of data across 15 columns. Our end goal was to import the data into a MySQL relational database and use it to drive a front-end PHP script we developed. Working with a dataset this large (or larger) is not the simplest of tasks, so I wanted to share some of the things you should consider and know when working with large datasets like this.
Pre-Import Analysis of Your Dataset
I can't emphasize this first step enough! Make sure you take the time to analyze the data you are working with before importing it at all. Understanding what all of the data represents, which columns relate to what, and what type of manipulation you need will save you time in the long run.
LOAD DATA INFILE Is Your Friend
Importing large data files like the ones we worked with (and larger ones) can be tough if you go ahead and try a regular CSV insert through a tool like phpMyAdmin. Not only will it fail in many cases, because your server won't be able to handle a file upload as large as some of your data files due to upload size restrictions and server timeouts, but even if it does succeed, the process could take hours depending on your hardware. The SQL function LOAD DATA INFILE was created to handle these large datasets and will significantly reduce the time the import process takes. Of note, this can be executed through phpMyAdmin, but you may still have file upload issues.
LOAD DATA INFILE '/mylargefile.csv' INTO TABLE temp_data FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
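As a rough sketch of how this looks in practice (the table layout and file path here are hypothetical; LOCAL is only needed if the CSV lives on the client machine rather than the server, and it also requires local_infile to be enabled on both client and server):
CREATE TABLE temp_data (
  id INT,
  status VARCHAR(50),
  email VARCHAR(255)
  -- remaining columns should match the CSV layout
);
LOAD DATA LOCAL INFILE '/path/to/mylargefile.csv'
INTO TABLE temp_data
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES; -- skip the CSV header row, if there is one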
MyISAM vs. InnoDB
Whether it's a large or small database project, it's worth taking a little time to consider which database engine you are going to use. The two main engines you will read about are MyISAM and InnoDB, and each has its own benefits and drawbacks. In brief, the things to consider (in general) are listed below, with a short example of choosing and converting the engine after the list:
MyISAM
- Lower memory usage
- Allows full-text searching
- Table-level locking – locks the entire table on write
- Great for read-intensive applications
InnoDB
- Uses more memory
- No full-text search support
- Faster performance
- Row-level locking – locks a single row on write
- Great for read/write-intensive applications
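As a minimal illustration (the column definitions here are made up), you pick the engine when creating a table and can convert an existing table later:
CREATE TABLE my_big_table (
  id INT NOT NULL PRIMARY KEY,
  email VARCHAR(255)
) ENGINE=MyISAM;
ALTER TABLE my_big_table ENGINE=InnoDB; -- converts the table; this rebuilds it, which can be slow on large tables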
Plan Your Design Carefully
Your database's design/structure is going to be a large factor in how it performs. Take your time when it comes to planning out the different fields, and analyze the data to figure out the best field types, defaults and field lengths. You want to accommodate the right amounts of data and try to avoid varchar columns and overly large data types when the data doesn't warrant it. As an additional step after you are done with your database, you may want to see what MySQL suggests as field types for all of your different fields. You can do this by executing the following SQL command:
SELECT * FROM my_big_table PROCEDURE ANALYSE();
The result will be a description of each column's information along with a recommendation for what datatype it should be and a proper length. Now you don't necessarily need to follow the recommendations, as they are based solely on existing data, but it may help put you on the right track and get you thinking.
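A note to avoid mixing up two similarly named commands: PROCEDURE ANALYSE() (removed in MySQL 8.0) is the one that reports suggested column types as described above, while the separate ANALYZE TABLE statement below only refreshes the index/key-distribution statistics that the query optimizer uses:
ANALYZE TABLE my_big_table;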
To Index or Not to Index
For a dataset as large as this it's critically important to create proper indexes on your data based on what you need to do with the data on the front-end, BUT if you plan to manipulate the data beforehand, refrain from placing too many indexes on the data. Not only will it make your SQL table larger, but it will also slow down certain operations like column additions, subtractions and additional indexing. With our dataset we needed to take the information we just imported and break it into several different tables to create a relational structure, as well as take certain columns and split the information into additional columns. We placed an index on the bare minimum of columns that we knew would help us with the manipulation. All in all, we took 1 large table consisting of 20 million rows of data and split its information into 6 different tables with pieces of the main data in them, along with newly created data based on the existing content. We did all of this by writing small PHP scripts to parse and move the data around.
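For example, if the only manipulation step that needs an index is matching rows by email, a single hypothetical index like the one below is enough, and it can be dropped once the restructuring is done:
CREATE INDEX idx_email ON my_big_table (email); -- helps the manipulation scripts match rows quickly
DROP INDEX idx_email ON my_big_table; -- remove it afterwards if the front-end no longer needs it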
Finding a Balance
A big part of working with large databases from a programming perspective is speed and efficiency. Getting all of the data into your database is great, but if the script you write to access the data is slow, what’s the point? When working with large datasets it’s extremely important that you take the time to understand all of the queries that your script is performing and to create indexes to help those queries where possible. One such way to analyze what your queries are doing is by executing the following SQL command:
EXPLAIN SELECT some_field FROM my_big_table WHERE another_field='MyCustomField';
By adding EXPLAIN to the start of your query, MySQL will spit out information describing which indexes it tried to use, which it did use and how it used them. I labeled this point 'Finding a balance' because although indexes can help your script perform faster, they can just as easily make it run slower. You need to make sure you index what is needed and only what is needed. Every index consumes disk space and adds to the overhead of the table. Every time you make an edit to your table, the affected indexes have to be updated, and the more indexes you have, the longer it will take. It all comes down to making smart indexes, efficient SQL queries and, most importantly, benchmarking as you go to understand what each of your queries is doing and how long it's taking to do it.
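For instance, if EXPLAIN shows the query above scanning the whole table (type: ALL, no usable key), a hypothetical index on the filtered column is the usual fix, after which re-running EXPLAIN should show the index being used:
CREATE INDEX idx_another_field ON my_big_table (another_field);
EXPLAIN SELECT some_field FROM my_big_table WHERE another_field='MyCustomField';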
Index On, Index Off
As we worked on the database and front-end script, both the client and we started to notice little things that needed changing and that required us to make changes to the database. Some of these changes involved adding/removing columns and changing the column types. As we had already set up a number of indexes on the data, making any of these changes required the server to do some serious work to keep the indexes in place and handle the modifications. On our small VPS server, some of the changes were taking upwards of 6 hours to complete…certainly not helpful for speedy development. The solution? Turn off indexes! Sometimes it's better to turn the indexes off, make your changes and then turn the indexes back on…especially if you have a lot of different changes to make. With the indexes off, the changes took a matter of seconds to minutes instead of hours. When we were happy with our changes we simply turned our indexes back on. This of course took quite some time to re-index everything, but it was at least able to re-index everything all at once, reducing the overall time needed to make these changes one by one. Here's how to do it:
- Disable indexes:
ALTER TABLE my_big_table DISABLE KEYS
- Enable indexes:
ALTER TABLE my_big_table ENABLE KEYS
Give MySQL a Tune-Up
Don't neglect your server when it comes to making your database and script run quickly. Your hardware needs just as much attention and tuning as your database and script do. In particular, it's important to look at your MySQL configuration file to see what changes you can make to enhance its performance. A great little tool that we've come across is MySQL Tuner http://mysqltuner.com/ . It's a quick little Perl script that you can download right to your server and run via SSH to see what changes you might want to make to your configuration. Note that you should actively use your front-end script and database for several days before running the tuner so that it has data to analyze. Running it on a fresh server will only provide minimal information and tuning options. We found it great to use the tuner script every few days over the course of about two weeks to see what recommendations it would come up with, and by the end we had significantly increased the database's performance.
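Once the script is on the server (downloaded from the site above, or fetched directly if the site exposes a raw .pl file), running it over SSH is a single command, roughly:
perl mysqltuner.pl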
Don’t be Afraid to Ask
Working with SQL can be challenging to begin with, and working with extremely large datasets only makes it that much harder. Don't be afraid to go to professionals who know what they are doing when it comes to large datasets. Ultimately you will end up with a superior product, quicker development and faster front-end performance. When it comes to large databases, sometimes it takes a professional's experienced eyes to find all the little caveats that could be slowing your database's performance.
For Windows, I use Navicat Premium. It allows you to transfer database objects from one database to another, or to an SQL file. The target database can be on the same server as the source or on another server.
Navicat Online Manual for Windows
Use the source command to import a large DB:
mysql -u username -p
> source sqldbfile.sql
This can import any large DB.
According to the MySQL documentation, none of these work! Pay attention! We will load test.sql into test_db, so type this into the shell:
mysql --user=user_name --password=yourpassword test_db < d:/test.sql
This works for sure!
Thanks.
Navigate to C:\wamp64\alias\phpmyadmin.conf and change from:
php_admin_value upload_max_filesize 128M
php_admin_value post_max_size 128M
to
php_admin_value upload_max_filesize 2048M
php_admin_value post_max_size 2048M
or more :)
You need:
- The BigDump script bigdump.php from the download
- A dump file of your database created by phpMyAdmin or another tool; let's call it dump.sql. You can also use a GZip-compressed dump file; let's call it dump.gz.
- An access account for your MySQL database
- An access account for some web server with PHP installed. This web server must be able to connect to the MySQL database. This ability is probably present if your web server and the MySQL server are from the same ISP.
- A good text editor like Notepad++ to edit the configuration file.
- An FTP client to upload the files to the web server.
- Common knowledge about files, PHP, MySQL databases, phpMyAdmin, FTP and HTTP
ReferenceURL : https://stackoverflow.com/questions/13717277/how-can-i-import-a-large-14-gb-mysql-dump-file-into-a-new-mysql-database