Problem description:

I have a table in my MySQL database that I constantly need to alter and insert rows into, but it keeps running slowly when I make changes, which is a problem because there are over 200k entries. I tested another table with very few rows and it is fast, so it's not the server or the database itself but that particular table that struggles. I need all of the table's rows and cannot find a way around the load issues.

DROP TABLE IF EXISTS `articles`;

/*!40101 SET @saved_cs_client = @@character_set_client */;

/*!40101 SET character_set_client = utf8 */;

CREATE TABLE `articles` (

`id` int(11) NOT NULL auto_increment,

`content` text NOT NULL,

`author` varchar(255) NOT NULL,

`alias` varchar(255) NOT NULL,

`topic` varchar(255) NOT NULL,

`subtopics` varchar(255) NOT NULL,

`keywords` text NOT NULL,

`submitdate` timestamp NOT NULL default CURRENT_TIMESTAMP,

`date` varchar(255) NOT NULL,

`day` varchar(255) NOT NULL,

`month` varchar(255) NOT NULL,

`year` varchar(255) NOT NULL,

`time` varchar(255) NOT NULL,

`ampm` varchar(255) NOT NULL,

`ip` varchar(255) NOT NULL,

`score_up` int(11) NOT NULL default '0',

`score_down` int(11) NOT NULL default '0',

`total_score` int(11) NOT NULL default '0',

`approved` varchar(255) NOT NULL,

`visible` varchar(255) NOT NULL,

`searchable` varchar(255) NOT NULL,

`addedby` varchar(255) NOT NULL,

`keyword_added` varchar(255) NOT NULL,

`topic_added` varchar(255) NOT NULL,

PRIMARY KEY (`id`),

KEY `score_up` (`score_up`),

KEY `score_down` (`score_down`),

FULLTEXT KEY `SEARCH` (`content`),

FULLTEXT KEY `asearch` (`author`),

FULLTEXT KEY `topic` (`topic`),

FULLTEXT KEY `keywords` (`content`,`keywords`,`topic`,`author`),

FULLTEXT KEY `content` (`content`,`keywords`),

FULLTEXT KEY `new` (`keywords`),

FULLTEXT KEY `author` (`author`)

) ENGINE=MyISAM AUTO_INCREMENT=290823 DEFAULT CHARSET=latin1;

/*!40101 SET character_set_client = @saved_cs_client */;

Answer:

With indexes it is a trade-off:

more indexes = faster selects, slower inserts

fewer indexes = slower selects, faster inserts

This is because the indexes have to be updated on every insert, and the more data there is in the table, the more work MySQL has to do to maintain them.

So maybe you could remove the indexes you don't need; that should speed up your inserts.
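The schema above has several overlapping FULLTEXT indexes, and every one of them is maintained on each INSERT. A sketch of how the redundant ones could be dropped, assuming your queries only ever MATCH against the exact column lists of the indexes you keep (a MATCH must name the columns of one index exactly, so verify your queries before dropping anything):

```sql
-- `author` duplicates `asearch` (both index only `author`),
-- `SEARCH` on (`content`) and `new` on (`keywords`) are single-column
-- indexes whose columns also appear in the composite indexes.
ALTER TABLE `articles`
  DROP INDEX `author`,
  DROP INDEX `SEARCH`,
  DROP INDEX `new`;
```

Each dropped FULLTEXT index is one less structure MySQL has to rebuild on every write to a 290k-row table.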

Answer:

Another option is to partition your table into several smaller ones - this removes the bottleneck.
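A sketch of what RANGE partitioning by `id` could look like (assumes MySQL 5.1+; the partition boundaries here are illustrative). One important caveat: MySQL does not support FULLTEXT indexes on partitioned tables, so the FULLTEXT keys would have to be dropped or moved elsewhere first:

```sql
-- `id` is the primary key, so it satisfies the rule that the
-- partitioning key must be part of every unique key on the table.
ALTER TABLE `articles`
PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (100000),
    PARTITION p1 VALUES LESS THAN (200000),
    PARTITION p2 VALUES LESS THAN MAXVALUE
);
```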

Answer:

Just try to apply the changes in an update script. Recreating tables is slow; try updating only the rows where changes have actually been made.

For example, collect all the changes in your program into a variable and send them to the table in a single query. That should be fast enough for most programs. But as we all know, speed depends on how much data is processed.
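One concrete form of "collecting the changes into one query" is a multi-row INSERT, which updates each index once per statement instead of once per row. A sketch (the column list is trimmed to three columns for illustration; the real table has many more NOT NULL columns that would also need values):

```sql
-- One statement, three rows: the FULLTEXT and B-tree indexes
-- are maintained once for the whole batch.
INSERT INTO `articles` (`content`, `author`, `topic`)
VALUES
    ('First article body',  'alice', 'news'),
    ('Second article body', 'bob',   'sports'),
    ('Third article body',  'carol', 'tech');
```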

Let me know if you need anything else.

Answer:

This may or may not help you directly, but I notice that you have a lot of VARCHAR(255) columns in your table. Some of them seem like they might be totally unnecessary — do you really need all those date / day / month / year / time / ampm columns? — and many could be replaced by more compact datatypes:

  • Dates could be stored as a DATETIME (or TIMESTAMP).
  • IP addresses could be stored as INTEGERs, or as BINARY(16) for IPv6.
  • Instead of storing usernames in the article table, you should create a separate user table and reference it using INTEGER keys.
  • I don't know what the approved, visible and searchable fields are, but I bet they don't need to be VARCHAR(255)s.
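A sketch of what those more compact types could look like (the target types are assumptions based on the column names; converting existing data, e.g. the stored IP strings, would need a migration step such as copying through a temporary column with `INET_ATON()`):

```sql
-- DATETIME replaces the redundant date/day/month/year/time/ampm columns;
-- INT UNSIGNED holds an IPv4 address via INET_ATON()/INET_NTOA();
-- TINYINT(1) replaces the VARCHAR(255) flag columns.
ALTER TABLE `articles`
  MODIFY `submitdate` DATETIME NOT NULL,
  MODIFY `ip`         INT UNSIGNED NOT NULL,
  MODIFY `approved`   TINYINT(1) NOT NULL DEFAULT 0,
  MODIFY `visible`    TINYINT(1) NOT NULL DEFAULT 0,
  MODIFY `searchable` TINYINT(1) NOT NULL DEFAULT 0;
```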

I'd also second Adrian Cornish's suggestion to split your table. In particular, you really want to keep frequently changing and frequently accessed metadata, such as up/down vote scores, separate from rarely changing and infrequently accessed bulk data like article content. See for example http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
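One way the split could look, moving the frequently updated vote counters into their own table (table and column names are illustrative, not from the original schema):

```sql
-- Hot, frequently written metadata lives apart from the bulk text,
-- so vote updates never touch the big FULLTEXT-indexed table.
CREATE TABLE `article_scores` (
    `article_id`  INT NOT NULL,
    `score_up`    INT NOT NULL DEFAULT 0,
    `score_down`  INT NOT NULL DEFAULT 0,
    `total_score` INT NOT NULL DEFAULT 0,
    PRIMARY KEY (`article_id`),
    KEY `score_up` (`score_up`),
    KEY `score_down` (`score_down`)
) ENGINE=InnoDB;

-- Backfill from the existing table, then drop the moved columns
-- (and their indexes) from `articles`.
INSERT INTO `article_scores` (`article_id`, `score_up`, `score_down`, `total_score`)
SELECT `id`, `score_up`, `score_down`, `total_score` FROM `articles`;
```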

Answer:

"I have a table inside of my mysql database which I constantly need to alter and insert rows into but it continues"

Try InnoDB on this table if your application performs a lot of concurrent updates and inserts there; its row-level locking is a big win over MyISAM's table-level locking.
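The conversion itself is a single statement, but note one caveat: InnoDB only gained FULLTEXT support in MySQL 5.6, so on older servers the FULLTEXT keys would have to be dropped (or the search moved to an external engine) before converting:

```sql
-- Rebuilds the table in the InnoDB engine; this copies all rows,
-- so expect it to take a while on a 290k-row table.
ALTER TABLE `articles` ENGINE=InnoDB;
```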

Answer:

I recommend you split that "big table" (not that big actually, but for MySQL it may be) into several tables to make the most of the query cache. Any time you update a record in that table, all cached queries against it are invalidated. You can also try reducing the isolation level, but that is a little more complicated.
