Database testing. Open source tools for database testing. Difficulties in database testing

Laravel provides many useful tools for testing database-driven applications. First, you can use the seeInDatabase() helper method to check that data in the database matches a given set of criteria. For example, to verify that the users table contains a record whose email field equals [email protected], you can do the following:

PHP
public function testDatabase()
{
    // Make a call to the application...

    $this->seeInDatabase('users', [
        'email' => '[email protected]'
    ]);
}

Of course, helper methods such as seeInDatabase() exist for convenience. You can use any of PHPUnit's built-in assertion methods to supplement your tests.
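As a sketch of that approach (the email value and the test name here are placeholders, not taken from the original example):

```php
public function testUserExists()
{
    // Count matching rows with the query builder...
    $count = \DB::table('users')
                ->where('email', 'user@example.com')
                ->count();

    // ...and check the result with a plain PHPUnit assertion
    $this->assertEquals(1, $count);
}
```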

Resetting the database after each test

It is often useful to reset your database after each test so that data from the previous test does not affect subsequent tests.

Using Migrations

One way to reset the database state is to roll back the database after each test and migrate it before the next test. Laravel provides a simple DatabaseMigrations trait that will automatically do this for you. Just use this trait on your test class and everything will be done for you:

PHP

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseMigrations;

    /**
     * Example of a basic functional test.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}

Using Transactions

Another way to reset the database state is to wrap each test case in a database transaction. For this, Laravel also provides a handy DatabaseTransactions trait that does it automatically:

PHP

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    /**
     * Example of a basic functional test.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}

By default, this trait only wraps the default database connection in a transaction. If your application uses multiple database connections, define the $connectionsToTransact property in your test class. This property should be an array of the connection names to wrap in transactions.
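For example, a minimal sketch (the connection names are placeholders for whatever is defined in your config/database.php):

```php
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    // Wrap these connections in transactions, not just the default one
    protected $connectionsToTransact = ['mysql', 'reporting'];
}
```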

Creation of factories

When testing, you may need to insert several records into the database before running a test. Instead of manually specifying the value of each column when creating this test data, Laravel lets you define a default set of attributes for each of your Eloquent models using factories. To get started, look at the database/factories/ModelFactory.php file in your application. Out of the box, this file contains one factory definition:

PHP
$factory->define(App\User::class, function (Faker\Generator $faker) {
    static $password;

    return [
        'name' => $faker->name,
        'email' => $faker->unique()->safeEmail,
        'password' => $password ?: $password = bcrypt('secret'),
        'remember_token' => str_random(10),
    ];
});

The closure that serves as the factory definition can return default test values for all of the model's attributes. The closure receives an instance of the Faker library, which lets you conveniently generate various kinds of random data for testing.

Of course, you can add your own additional factories to the ModelFactory.php file. You can also create additional factory files for each model for clearer organization. For example, you could create UserFactory.php and CommentFactory.php files in your database/factories folder. All files in the factories folder are loaded automatically by Laravel.

Factory states

States let you define discrete modifications that can be applied to your model factories in any combination. For example, your User model might have a delinquent state that changes the default value of one of its attributes. You define state transformations using the state() method:

PHP
$factory->state(App\User::class, 'delinquent', function ($faker) {
    return [
        'account_status' => 'delinquent',
    ];
});

Using factories

Creating Models

Once the factories are defined, you can use the global factory() function in your tests or seed files to generate model instances. Let's look at some examples of creating models. First, we'll use the make() method to create models without saving them to the database:

PHP public function testDatabase()
{
    $user = factory(App\User::class)->make();

    // Use the model in tests...
}

You can also create a collection of models or create models of a specific type:

PHP $users = factory(App\User::class, 3)->make();

You can also apply any of your states to the models. To apply multiple state transformations, specify the name of each state to apply:

PHP $users = factory(App\User::class, 5)->states('delinquent')->make();

$users = factory(App\User::class, 5)->states('premium', 'delinquent')->make();

Attribute Override

If you want to override some of the default values of your models, pass an array of values to the make() method. Only the specified values will be replaced; the rest will keep the defaults specified in the factory:

PHP $user = factory(App\User::class)->make([
    'name' => 'Abigail',
]);

Persisting models

The create() method not only creates model instances but also saves them to the database using Eloquent's save() method:

PHP public function testDatabase()
{
    // Create one App\User instance...
    $user = factory(App\User::class)->create();

    // Create three App\User instances...
    $users = factory(App\User::class, 3)->create();

    // Use the models in tests...
}

You can override model attributes by passing an array to the create() method:

PHP $user = factory(App\User::class)->create([
    'name' => 'Abigail',
]);

Closure Relationships and Attributes

You can also attach relationships to models using closure attributes in your factory definitions. For example, if you want to create a new User instance when creating a Post, you can do this:

PHP $factory->define(App\Post::class, function ($faker) {
    return [
        'title' => $faker->title,
        'content' => $faker->paragraph,
        'user_id' => function () {
            return factory(App\User::class)->create()->id;
        },
    ];
});

This closure also receives a specific array of attributes from the factory that contains it:

PHP $factory->define(App\Post::class, function ($faker) {
    return [
        'title' => $faker->title,
        'content' => $faker->paragraph,
        'user_id' => function () {
            return factory(App\User::class)->create()->id;
        },
        'user_type' => function (array $post) {
            return App\User::find($post['user_id'])->type;
        },
    ];
});

How to test and debug databases

Automated unit testing of application code is simple and straightforward. But how do you test a database, or an application that works with one? A database is not just program code; it is an object that preserves state. And if we start changing data in the database during testing (and without that, what kind of testing would it be?), then after each test the database will have changed. This can interfere with subsequent tests and permanently corrupt the database.

The key to solving the problem is transactions. One of the features of this mechanism is that as long as the transaction is not completed, you can always undo all changes and return the database to the state at the time the transaction began.

The algorithm is like this:

  1. open a transaction;
  2. carry out any preparatory steps for the test;
  3. run the unit test (or simply the script whose behavior we want to check);
  4. check the result;
  5. roll back the transaction, returning the database to its original state.

Even if there are unclosed transactions in the code under test, the external ROLLBACK will still roll back all changes correctly.
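The steps above can be sketched in SQL as follows (the table, data, and stored procedure here are invented purely for illustration):

```sql
BEGIN TRANSACTION;                            -- 1. open a transaction

INSERT INTO accounts (id, balance)            -- 2. preparatory steps
VALUES (42, 100);

EXEC usp_apply_monthly_fee @account_id = 42;  -- 3. run the code under test

SELECT balance FROM accounts WHERE id = 42;   -- 4. check the result

ROLLBACK TRANSACTION;                         -- 5. undo all changes
```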

This is all well and good when we need to test a SQL script or a stored procedure. But what if we are testing an application that connects to the database itself, opening its own connection? Moreover, when debugging, we will probably want to look at the database through the eyes of the application being debugged. What then?

Don't rush to set up distributed transactions; there is a simpler solution! Using standard SQL Server tools, you can open a transaction on one connection and continue it on another.

To do this, you need to connect to the server, open a transaction, obtain a token for that transaction, and then pass this token to the application under test. It will join our transaction in its session and from that moment on, in our debugging session we will see the data (and also feel the locks) exactly as the application under test sees it.

The sequence of actions is as follows:

Having started a transaction in a debug session, we must find out its identifier. This is a unique string by which the server distinguishes transactions. This identifier must somehow be passed to the application under test.

Now the application’s task is to bind to our control transaction before it starts doing what it’s supposed to do.
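In SQL Server, this mechanism is exposed through sp_getbindtoken and sp_bindsession (note that both are deprecated in recent versions). A sketch of the two sides:

```sql
-- Debugging session: open the control transaction and obtain its token
DECLARE @token varchar(255);
BEGIN TRANSACTION;
EXEC sp_getbindtoken @token OUTPUT;
SELECT @token;   -- pass this string to the application under test

-- Application session: bind to the control transaction before doing any work
EXEC sp_bindsession '<token obtained from the debugging session>';
-- from here on, both sessions share one transaction, its locks, and its view of the data
```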

Then the application starts working, including running its stored procedures, opening its transactions, changing the isolation mode... But our debugging session will all this time be inside the same transaction as the application.

Let's say an application locks a table and starts changing its contents. At this moment, no other connections can look into the locked table. But not our debugging session! From there we can look at the database in the same way as the application does, since the SQL server believes that we are in the same transaction.

While for all other sessions the application's actions are hidden by locks...

Our debugging session passes through the locks (the server thinks they are our own locks)!

Or imagine that the application starts working with its own versions of strings in SNAPSHOT mode. How can I look into these versions? Even this is possible if you are connected by a common transaction!

Don't forget to roll back the control transaction at the end of this exciting process. This can be done both from the debugging session (if the testing process completes normally) and from the application itself (if something unexpected happens in it).


Database testing is not as common as testing other parts of an application; in some tests the database is simply mocked out. In this article I will try to look at tools for testing relational and NoSQL databases.

This situation is due to the fact that many databases are commercial and the entire necessary set of tools for working with them is supplied by the organization that developed the database. However, the growing popularity of NoSQL and various MySQL forks in the future may change this state of affairs.

Database Benchmark

Database Benchmark is a .NET tool designed to stress test databases with large data streams. The application runs two main test scenarios: inserting a large number of randomly generated records with sequential or random keys, and reading inserted records ordered by their keys. It has extensive capabilities for generating data, graphical reports and configuring possible types of reports.

Supported databases: MySQL, SQL Server, PostgreSQL, MongoDB and many others.

Database Rider

Database Rider aims to make database testing no more difficult than unit testing. The tool is based on Arquillian, so a Java project only needs a dependency on DBUnit. It also offers annotations in the style of JUnit, integration with CDI via interceptors, support for JSON, YAML, XML, XLS, and CSV, configuration via the same annotations or yml files, integration with Cucumber, support for multiple databases, and handling of temporal types in datasets.

DbFit

DbFit is a framework for test-driven database development. It is written on top of FitNesse, a mature and powerful tool with a large community. Tests are written as tables, which makes them more readable than regular unit tests. You can run them from the IDE, from the command line, or with CI tools.

Supported databases: Oracle, SQL Server, MySQL, DB2, PostgreSQL, HSQLDB and Derby.

dbstress

dbstress is a database performance and stress testing tool written in Scala and Akka. Using a JDBC driver, it executes queries in parallel a set number of times (possibly against several hosts) and saves the final result to a CSV file.

Supported databases: any database with a JDBC driver.

DbUnit

DbUnit is a JUnit extension (also usable with Ant) that can return the database to a known state between tests. This avoids dependencies between tests: if one test fails and corrupts the database, the next test still starts from a clean state. DbUnit can transfer data between a database and an XML document, can work with large datasets in streaming mode, and can verify that the resulting database matches a given reference.

Supported databases: any database with a JDBC driver.

DB Test Driven

DB Test Driven is a tool for unit testing databases. The utility is very lightweight, works with native SQL, and installs directly into the database. It integrates easily with continuous integration tools, and the SQL Server version can measure test code coverage.

Supported databases: SQL Server, Oracle.

HammerDB

HammerDB is an open source load testing and benchmarking tool for databases. It is automated, multi-threaded, and extensible with support for dynamic scripts.

JdbcSlim

JdbcSlim offers easy integration of queries and commands into FitNesse's Slim framework. The project's focus is on keeping configuration, test data, and SQL separate, so that requirements are written independently of the implementation and remain understandable to business users. JdbcSlim is database-agnostic: it contains no code specific to any particular database system. Everything in the framework is described at a high level, and any database-specific behavior is implemented by changing just one class.

Supported databases: Oracle, SQL Server, PostgreSQL, MySQL and others.

JDBDT (Java DataBase Delta Testing)

JDBDT is a Java library for testing SQL-based database applications, designed for automated setup and validation of database state in tests. JDBDT has no dependencies on third-party libraries, which simplifies its integration. Compared to existing database testing libraries, JDBDT is conceptually distinguished by its support for δ-assertions (assertions over changes in database state).

Supported databases: PostgreSQL, MySQL, SQLite, Apache Derby, H2 and HSQLDB.

NBi

NBi is essentially an add-on for NUnit aimed primarily at the Business Intelligence sphere. Besides relational databases, it can work with OLAP platforms (Analysis Services, Mondrian, etc.), ETL processes, and reporting systems (Microsoft technologies). The main goal of this framework is to let you create tests declaratively, in XML. You won't need to write tests in C# or use Visual Studio to compile them: you just create an XML file, interpret it with NBi, and run the tests. Besides NUnit, it can be ported to other test frameworks.

Supported databases: SQL Server, MySQL, PostgreSQL, Neo4j, MongoDB, DocumentDB and others.

NoSQLMap

NoSQLMap is written in Python to audit databases for injection vulnerabilities and configuration weaknesses, and to assess how resistant web applications backed by NoSQL databases are to this type of attack. Its main goals are to provide a penetration-testing tool for MongoDB servers and to dispel the myth that NoSQL applications are impervious to injection.

Supported databases: MongoDB.

NoSQLUnit

NoSQLUnit is a JUnit extension for writing tests in Java applications that use NoSQL databases. The goal of NoSQLUnit is to manage the lifecycle of NoSQL databases in tests. The tool helps you keep the databases under test in a known state and standardizes the way you write tests for applications using NoSQL.

Supported databases: MongoDB, Cassandra, HBase, Redis and Neo4j.

ruby-plsql-spec

ruby-plsql-spec is a framework for unit testing PL/SQL using Ruby. It is based on two other libraries:

  • ruby-plsql – Ruby API for calling PL/SQL procedures;
  • RSpec is a framework for BDD.

Supported databases: Oracle

SeLite

SeLite is an extension from the Selenium family. The main idea is to have a SQLite-based database isolated from the application. It lets you detect web server errors, share data between tests, work with snapshots, and so on.

Supported databases: SQLite, MySQL, PostgreSQL.

sqlmap

sqlmap is a penetration testing tool that can automate the process of detecting and exploiting SQL injections and taking over database servers. It is equipped with a powerful detection engine and many niche pentesting features.

Supported databases: MySQL, Oracle, PostgreSQL, SQL Server, DB2 and others.


1) Goals and objectives

2) Description of the database

3) Working with the database

4) Load testing of the database

5) Conclusion

6) Literature

Goals and objectives

Goal: create a database of elixirs for the game The Witcher 3 containing information about the types of elixirs, their properties, what they are made from, the places where they can be found, and the monsters against which they can be used; create optimized queries for this database and load test it.

Tasks:

· Create a database schema with at least 5 entities in MySQL Workbench; describe these entities and their relationships.

· Describe the use of the database and its main queries, measure their execution time, and draw conclusions.

· Optimize the database.

· Perform load testing using Apache JMeter, with extensions for building graphs.

Database Description

The course work uses the Witcher1 database, whose main entities are the following tables:

Fig.1 Schematic representation of the Witcher1 database

The Ingridients table contains the ingredients needed to create the elixirs in the game, which are described in the Elixirs table. Several ingredients are used to create an elixir, but each ingredient is unique to its elixir. For this reason, a 1:n (one-to-many) relationship was established between these tables, as shown in the database diagram (Fig. 1).

The Ingridients table also contains information about the names of the ingredients (Discription) and where this ingredient can be found (WhereFind). The idElixirs column is a linking column for the Ingridients and Elixirs tables.

The Elixirs table contains information on how to use a specific elixir and the name of that elixir. This table is the key table for the other tables.

The Locations table contains information about which location or city a specific ingredient can be found in.

Table IL contains consolidated information about where and how to find a specific ingredient in a given area and what it is. An n:m (many to many) relationship was established between the Ingridients and Locations tables, since multiple ingredients can be found in multiple locations, as indicated in the IL child table.

The Monsters table contains information about the types of monsters in The Witcher 3, how to recognize each monster, and their characteristic names.

The ML table is a child table implementing the n:m union of the Locations and Monsters tables. It contains specific information about how to defeat a given monster and which elixirs (including special witcher signs) can be used, as well as in which area and by what signs to look for that type of monster.

Working with the database

The Witcher1 database contains information about which elixirs should be used against which monsters, including special tactics for especially dangerous ones such as the Pestilence Maiden, the Devil, the Imp, the Goblin, etc. Analyzing the information table by table would take a long time, so we will create dedicated queries that will be as useful as possible for the user.

· A request for information on how to find a specific monster.

This query will contain the keyword JOIN, thanks to which the ML and Monsters tables of the Witcher1 database will be accessed.

This request will look like this:

SELECT * FROM ml JOIN monsters ON monsters.idMonsters = ml.idMonsters;

After executing the query, we get a rather large table as output, the result of joining the two tables. To keep the displayed table manageable, you can specify which monster to show information about. For example, for Hym the query will look like this:

SELECT monsters.MonstersName, monsters.MonstersDiscription,
       ml.DiscriptionHowFind, ml.idLocations
FROM ml JOIN monsters ON monsters.idMonsters = ml.idMonsters
WHERE monsters.MonstersName = 'Hym';

You can find out which monster a given ID corresponds to by querying the Monsters or ML tables:

SELECT idMonsters, MonstersName FROM ml;

SELECT idMonsters, MonstersName FROM monsters;

To check compliance, you can query both the ML and Monsters tables, first joining them by idMonsters.

SELECT ml.idMonsters, monsters.MonstersName
FROM ml JOIN monsters ON ml.idMonsters = monsters.idMonsters
ORDER BY monsters.idMonsters;

· A query about which elixir is suitable for a given monster.

To implement this query, a JOIN is used. The query addresses the Elixirs and Monsters tables and returns information about when and which elixir to drink when fighting a monster:

SELECT monsters.MonstersName, elixirs.ElixirName, elixirs.ElixirDiscription
FROM elixirs JOIN monsters ON elixirs.idElixirs = monsters.idElixirs;

· A query about what ingredient is found in a particular area.

To implement this query, a JOIN is used. The query addresses the Ingridients and Locations tables and returns which ingredient is found in which location, along with information about its type:

SELECT ingridients.Discription, locations.Discription, ingridients.WhereFind
FROM ingridients JOIN locations ON ingridients.idIngridients = locations.idIngridients
ORDER BY ingridients.Discription;

· UPDATE queries

We implement this query for a monster in the Monsters table named Hym. Let's say we want to change his name to Him:

UPDATE monsters
SET MonstersName = 'Him'
WHERE idMonsters = 1;

But since Hym is correct in the English version, let's change it back:

UPDATE monsters
SET MonstersName = 'Hym'
WHERE idMonsters = 1;

Fig.2. Implementing UPDATE queries

· "Aggregation" queries. COUNT and COUNT(DISTINCT)

The COUNT function counts the number of rows (ignoring NULL values of the counted expression) in a given table. COUNT(*) has an optimized execution path when counting the rows of a single table. For example:

Fig.3. Count rows in the Elixirs, Monsters, and Monsters JOIN elixirs tables.

The COUNT(DISTINCT ...) function returns the number of distinct non-NULL values, which is useful for counting unique entries in a table:

Fig.4. Counting non-repeating elixirs in the Monsters table.
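The figures themselves are not reproduced here, but the queries behind them are presumably of this form (the exact column choices are my guess based on the schema described above):

```sql
-- Row counts (Fig. 3)
SELECT COUNT(*) FROM elixirs;
SELECT COUNT(*) FROM monsters;
SELECT COUNT(*) FROM monsters JOIN elixirs ON monsters.idElixirs = elixirs.idElixirs;

-- Distinct elixirs referenced in the Monsters table (Fig. 4)
SELECT COUNT(DISTINCT idElixirs) FROM monsters;
```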

· The DELETE statement.

Let's add another row to the Elixirs table using INSERT:

INSERT INTO elixirs VALUES (6,'ForDelete','DiscriptionDelete');

Fig.5. Adding a row to the Elixirs table.

Now let's delete this row, since there is no need for an elixir that won't help in the fight against monsters:

DELETE FROM elixirs WHERE idElixirs=6;

Fig.6. Delete the added line.

Database Load Testing

Now that queries have been completed and access to the database has been established, it can be tested using several parameters:

· Response Times Over Time – displays, for each request, its average response time in milliseconds over the course of the test.

· Response Times Distribution – displays the number of responses falling into each time interval.

· Response Time Percentiles – displays percentiles of the response time values; on the graph, the X axis shows percentages and the Y axis shows the response time.

To make the tests as realistic as possible, we will set certain parameters:

Fig.7. Test parameters

Number of Threads (users) – the number of virtual users. In our case we set it to 1000 in order to load the database as much as possible.

Ramp-Up Period – the period during which all users will be involved.

We will check all JOIN requests for their performance when activated simultaneously by several users.

The last three items are the graph plugins for the checks we will use to test the database.

· Checking Response Times Over Time

Fig.7. The result of executing queries during the test Response Times Over Time

As the graph shows, the hardest query to execute was “Monsters&Locations”, which required the longest response time. You can confirm the reason by running the query in the console: both the Monsters and ML tables contain lengthy descriptions of monsters and where to find them, so the query takes quite a long time to complete.

· Checking Response Times Distribution

Fig.8. The result of executing queries during the test Response Times Distribution.

From this graph we can conclude that the number of responses for each of our requests in the same period of time is the same.

· Checking Response Time Percentiles

The ordinate axis shows execution time, and the abscissa shows the percentage of the total number of requests. From the graph we can conclude that 90% of requests complete within 0 to 340 milliseconds: from 5% to 15% the count grows linearly, and then exponentially with a very small growth coefficient.

The remaining 10% complete between 340 and 700 milliseconds, which suggests a very heavy load on the database.

Conclusion

In this course work, all assigned tasks were completed. The database was designed and filled with data, and the main ways of using it were demonstrated in the form of queries and their results.

Finally, load testing was carried out and its results analyzed, with conclusions drawn.

It should be noted that the database was created purely for educational purposes, so it is not very large.

Another important characteristic is security: passwords, if such a table is created, must be stored in encrypted form and protected from unauthorized access.

Literature

1. http://phpclub.ru/mysql/doc/ – online resource “MySQL – reference guide”

2. Schwartz B., Zaitsev P., Tkachenko V. et al. – MySQL. Optimizing Performance (2nd edition)

3. Thalmann L., Kindal M., Bell C. – Ensuring High Availability of MySQL-Based Systems

4. Andrzej Sapkowski – The Witcher (large collection), 571 pages

5. CD PROJEKT RED, GOG.COM – The Witcher 3: Wild Hunt




Database testing is necessary to verify the functionality of the database. To do this, queries of various types are composed: selection queries, queries with calculated fields, parameterized queries, queries with data grouping, update queries, and delete queries.

Example query: Display a list of books taken by a specific student. Set your full name as a parameter.

Example query: Display a list of books by a specific author indicating storage locations in the library. Set the author's full name as a parameter.
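In Access SQL, such a parameterized query might look like this (the table and column names are illustrative; the actual schema is not shown in the text):

```sql
PARAMETERS [Author full name] Text ( 255 );
SELECT Books.Title, Books.StorageLocation
FROM Books
WHERE Books.AuthorFullName = [Author full name];
```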

Example query: Determine by library card number which class the corresponding student is in and who their class teacher is.

Fig. 15. Query 3. “Find a student by library card number and determine in which class he is studying”

Example query: Determine by the student's full name (Student_Full name) in which class the debtor is studying and who his class teacher is.

For convenient work with the records of the various tables, a button form was created with which you can open any table to view, update, and change information. The button form is shown in Fig. 17.

Fig. 17. Database button form

CONCLUSION

The final qualifying work was carried out on the topical subject “Development of an information system for a rural school library.”

The goal of the diploma project, to develop an information system for the school library of the municipal educational institution secondary school in the village of Solnechny, Fedorovsky district, Saratov region, has been achieved.

During the graduation project the following tasks were solved:

- the library was considered as an element of the educational environment;

- the government concept of supporting and developing children's reading was studied;

- the technologies used by libraries of educational institutions were analyzed;

- the subject area was described based on the survey;

- a technical specification for the development of an information system for a rural school library was developed;

- a functional model of the school library's activities was built;

- the input and output information flows were described;

- an information system was developed based on the Access DBMS;

- the developed relational database was tested.

In the final qualifying work, a technical specification was developed, based on an analysis of a survey of the subject area, for building an information system that automates manual operations for storing, searching, and accounting for the issuance and return of books by students. The technical specification (TOR) reflected the requirements of the system's users, the library staff.

Based on the technical specifications, a functional model of the activities of a rural school library has been developed. The functional model, in turn, served as material for identifying non-automated areas in the library’s work.

The choice of a DBMS as the development environment was determined by the technical capabilities of the rural library. As a result, the core of the information system, the database, was built on the Access DBMS.

For the convenience of users, a push-button interface has been developed.

Corresponding queries have been developed to test the database. Completing these queries allows us to judge the normal performance of the information system for a rural school library.

BIBLIOGRAPHICAL LIST