Why is 1C 8.3 so slow? File-base slowdowns and how to avoid them (from recent experience). Working with non-standard or modified versions

Today the 1C system is one of the main tools for running a small or medium-sized business. As a rule, all employees of the organization have access to the program, so if 1C starts to lag or work slowly, this has a tangible effect on day-to-day business. Let's look at how you can speed up and optimize work in 1C on your own.


Optimizing by updating 1C

New versions of 1C generally run faster and more reliably, so be sure to keep up with updates. Accounting in particular should be updated as often as possible, especially when new releases of regulated reporting come out.

Many users have long relied on automatic updates, although for 1C:Enterprise 8.3 this task is just as easily handled manually and will not cause any trouble.

The first step is to download the latest release for the platform version currently in use. This is done either from the ITS disk or through the web interface that provides ongoing support to users of 1C:Enterprise 8.3; configuration updates are also supplied there officially.

In the latter case, the archive with the update data is downloaded separately. Unpack it into any folder convenient for you, then run the .exe file and simply click "Next" in the window that opens.

Another page will appear, where the user can select the installation path. Changing it is recommended only for advanced PC owners; the defaults are usually enough for most cases. By default a single folder is specified into which all updates are installed, which is much more convenient than scattering them across different paths. Then simply click "Next" a few more times.

All that remains is to click the final button, "Install".

How to speed up 1C if the platform slows down

Most often, problems arise because the person performing the update loses concentration at one of the stages. It is important to choose the right update scheme; only then will you avoid the situation where 1C freezes during the update.

Version 7.7 update

There are several types of configuration, and the course of further actions depends on which one you have.

  • Typical - in this case, it is assumed that regulated reporting is updated as well.
  • Typical industry - these largely resemble the previous option. It is important to read the developer's instructions in advance; otherwise you will struggle to figure out why 1C 8.3 crashes during the update.
  • Modified standard - the user can always modify the application himself so that it meets current needs. Another way to expand functionality is to move to a newer platform, for example version 8.

About versions 8.0 and 8.1

Platform 8.0 is currently being withdrawn from support. New standard configurations will only work on the latest versions. Keep in mind that all intermediate releases must be applied without exception; otherwise there is a high probability of simply losing data, or of running into a situation where 1C freezes while updating the configuration.
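The requirement to pass through every intermediate release, never skipping any, can be sketched in code. A minimal illustration, assuming version numbers compare component-wise; the release numbers below are made up for the example, not a real 1C release list:

```python
def update_chain(current, target, releases):
    """Return the ordered list of releases to apply between current and target."""
    def key(v):
        return tuple(map(int, v.split(".")))
    ordered = sorted(releases, key=key)
    return [v for v in ordered if key(current) < key(v) <= key(target)]

print(update_chain("3.0.35.20", "3.0.38.52",
                   ["3.0.36.10", "3.0.38.52", "3.0.33.5", "3.0.37.25"]))
# ['3.0.36.10', '3.0.37.25', '3.0.38.52']
```

Every release between the current and target versions ends up in the chain, in order, which is exactly the discipline the update procedure demands.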

Alternatively, a new standard configuration can be deployed, and the balances from the old infobases transferred into it.

As for version 8.1, there are several ways to upgrade to it:

  1. manually;
  2. in automatic mode;
  3. turning to specialist firms that provide services in this area.

Working with non-standard or modified versions

Initially, any configuration is a typical development. It ceases to be one as soon as changes are made at the enterprise, for example during implementation. Two classes of atypical configurations stand out:

  1. changed;
  2. created from scratch, taking into account the needs of a particular enterprise.

Sometimes a configuration of the second class is widely distributed to users, in which case it effectively becomes standard; it is just that the vendor is not 1C itself but the company that created the new version.

A configuration can be kept up to date through the following actions:

  • Error correction.
  • Functional expansion.
  • Improvement.
  • Fixing cases where the 1C 8.3 configuration will not update because of service errors.

The installation process can take varying amounts of time depending on your current Internet speed. In a separate window the user chooses whether to update at the end of the session or immediately; with the latter option, make sure no one else is working in the application. The process itself requires exclusive mode inside 1C:Enterprise 8.3, and the latest update is no exception.

  • Remember that not every release version will fit the current configuration.
  • If updates have not been applied for a long time, you may have to download several files or archives at once.
  • The list makes it easy to see which version of 1C:Enterprise 8.3 is needed; the user selects the update himself.

When the process ends, the Configurator can be closed. This mode is the one most often used for updates: it is convenient and automates almost the entire process. The next time you start the program, you may see a message stating that the platform version is out of date and is not recommended for use.

Other causes of slowdowns

If the program has been updated correctly and without errors but 1C still runs slowly, the reason may be one of the following:

  • Antivirus - when configured correctly, no antivirus will interfere with the system, but with factory settings 1C performance can drop by 5-10%. You can optimize the antivirus through its advanced settings, for example by disabling background scanning where acceptable.
  • Computer specifications - insufficiently powerful computers often cause a sharp drop in 1C performance. Pay particular attention to the processor, the operating system, and the video card.

These measures will noticeably optimize and speed up work in 1C for any company or enterprise.

How to increase the speed and convenience of work in 1C

We often get questions about what makes 1C slow, especially after switching to version 8.3. Thanks to our colleagues from Interface LLC, we can answer in detail:

In our previous publications, we have already touched on the impact of the performance of the disk subsystem on the speed of 1C, however, this study concerned the local use of the application on a separate PC or terminal server. At the same time, most small implementations involve working with a file base over a network, where one of the user's PCs is used as a server, or a dedicated file server based on a regular, most often also inexpensive, computer.

A brief survey of Russian-language resources on 1C showed that this issue is diligently avoided; when problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost received wisdom that configurations based on the managed application run much slower than conventional ones. As a rule, the arguments are about "hardware": "Accounting 2.0 simply flew, and the 'troika' barely crawls". There is some truth in these words, so let's try to figure it out.

Resource consumption at a glance

Before starting this study, we set ourselves two goals: to find out if managed application-based configurations are actually slower than conventional configurations, and which resources have the highest impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1 respectively, each allocated 2 cores of the host's Core i5-4670 and 2 GB of RAM, which is roughly equivalent to an average office machine. The server was placed on a RAID 0 array of two WD Se drives, and the client on a similar array of general-purpose disks.

As experimental bases we chose several Accounting 2.0 configurations, release 2.0.64.12, which was then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that draws attention is the size of the "troika's" infobase: it has grown considerably, as has its appetite for RAM:

We are ready to hear the usual "what on earth did they add to this troika", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Nor do the employees of specialized firms that service (read: update) these bases.

Meanwhile, the 1C infobase is a full-fledged DBMS in its own format, and it too requires maintenance; there is even a tool for this called Testing and fixing the infobase. Perhaps the name played a cruel joke by implying this is purely a troubleshooting tool, but poor performance is also a problem, and restructuring, reindexing, and table compression are database optimization techniques well known to any RDBMS administrator. Shall we check?

After applying the selected actions, the database dramatically "lost weight", becoming even smaller than the "two", which no one has ever optimized either, and the RAM consumption also slightly decreased.

Subsequently, as new classifiers and directories are loaded, indices are created, and so on, the base will grow again; overall, "troika" bases are larger than "two" bases. What matters more, however, is RAM: if the second edition was content with 150-200 MB, the new edition already needs half a gigabyte, and this value should be taken into account when planning resources for working with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves substantial amounts of data across the network. Most small-business networks are built on inexpensive 100 Mbps equipment, so we began by comparing 1C performance in 100 Mbps and 1 Gbps networks.

What happens when you start a 1C file base over the network? The client downloads a fairly large amount of data into temporary folders, especially on the first, "cold" launch. At 100 Mbps we predictably hit the bandwidth ceiling, and loading can take a long time, about 40 seconds in our case (one graph division is 4 seconds).
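As a sanity check of that figure, transfer time can be estimated from link speed. A rough sketch assuming a hypothetical payload of about 425 MB and ~85% usable bandwidth after protocol overhead (both are assumptions for illustration, not measurements from the article):

```python
def transfer_seconds(size_mb, link_mbps, efficiency=0.85):
    """Time in seconds to move size_mb megabytes over a link_mbps link.

    `efficiency` is an assumed ballpark for TCP/SMB protocol overhead.
    """
    return size_mb * 8 / (link_mbps * efficiency)

print(round(transfer_seconds(425, 100)))   # 40 -> the order observed at 100 Mbps
print(round(transfer_seconds(425, 1000)))  # 4  -> the same payload over gigabit
```

The tenfold link speed translates directly into a tenfold shorter transfer, which is why the "cold" start benefits so visibly from gigabit.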

The second launch is faster, since some of the data is cached and stays there until a reboot. Moving to a gigabit network speeds up loading considerably, both "cold" and "hot", and the ratio between the values is preserved. We therefore decided to express the results in relative terms, taking the largest value of each measurement as 100%:
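The normalization used for the charts can be sketched as follows; the timings in the example are illustrative, not the article's measurements:

```python
def to_relative(values):
    """Express each value as a percentage of the series maximum (the slowest run)."""
    peak = max(values)
    return [round(100 * v / peak) for v in values]

cold_start_seconds = [40, 10]           # 100 Mbps vs 1 Gbps, illustrative
print(to_relative(cold_start_seconds))  # [100, 25]
```

Scaling each series to its own maximum lets runs of very different absolute duration be compared on one chart.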

As the graphs show, Accounting 2.0 loads twice as fast at either network speed, and the move from 100 Mbps to 1 Gbps speeds up loading fourfold. In this mode there is no visible difference between the optimized and non-optimized "troika" bases.

We also checked the impact of network speed in heavy modes, for example during group reposting of documents. The result is likewise expressed in relative terms:

Here things get more interesting: on a 100 Mbps network the optimized "troika" base works at the same speed as the "two", while the unoptimized one is twice as slow. On gigabit the ratios hold: the non-optimized "three" is still twice as slow as the "two", and the optimized one lags by a third. The move to 1 Gbps cuts execution time threefold for version 2.0 and twofold for version 3.0.

To evaluate the impact of network speed on everyday work, we measured performance by executing a sequence of predefined actions in each database.
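A minimal timing harness in that spirit might look like this; the steps here are stand-in functions, not real 1C actions:

```python
import time

def measure(steps):
    """Run each (name, fn) pair once, returning elapsed seconds per step."""
    results = {}
    for name, fn in steps:
        start = time.perf_counter()
        fn()
        results[name] = time.perf_counter() - start
    return results

# Stand-in steps; a real run would open forms, post documents, and so on.
timings = measure([
    ("open journal", lambda: time.sleep(0.01)),
    ("post document", lambda: time.sleep(0.02)),
])
print(sorted(timings))  # ['open journal', 'post document']
```

Running the same fixed sequence in each base under each network speed is what makes the relative comparison meaningful.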

For everyday tasks, then, network bandwidth is not a bottleneck: the unoptimized "three" is only 20% slower than the "two", and after optimization it turns out to be about the same amount faster - here the advantages of thin-client mode show. The move to 1 Gbps gives the optimized base no advantage, while the unoptimized base and the "two" start to work faster, with only a small difference between them.

The tests make it clear that the network is not a bottleneck for the new configurations, and the managed application even runs faster than the conventional one. You can recommend moving to 1 Gbps if heavy tasks and database loading speed are critical; in other cases, the new configurations let you work effectively even on slow 100 Mbps networks.

So why does 1C slow down? We will investigate further.

Server disk subsystem and SSD

In the previous article we achieved a 1C performance gain by placing the databases on an SSD. Perhaps server disk performance is insufficient? We measured server disk performance during a group reposting in two databases at once and got a rather optimistic result.

Despite the relatively high number of input/output operations per second (913 IOPS), the queue length never exceeded 1.84, which is a very good result for a two-disk array. From this we can assume that a mirror of ordinary disks will suffice for the normal operation of 8-10 network clients even in heavy modes.
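A quick check of those counters: dividing the queue length across the two spindles shows the array was far from saturated (the threshold of roughly two outstanding requests per spindle is a common rule of thumb, not a figure from the article):

```python
# Counters from the group-reposting run on the two-disk RAID 0 array.
iops = 913
queue_length = 1.84
disks = 2

queue_per_disk = queue_length / disks
print(round(queue_per_disk, 2))  # 0.92
print(queue_per_disk < 2)        # True: well under ~2 outstanding requests per spindle
```

With less than one request queued per disk on average, the spindles still had headroom, which is what lets us extrapolate to several concurrent clients.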

So is an SSD needed on the server? Testing answers this best; we ran it using the same methodology, with a 1 Gbps network connection everywhere, and again express the result in relative terms.

Let's start with the database loading speed.

It may surprise some, but placing the base on an SSD on the server does not affect database loading speed. The main limiting factors here, as the previous test showed, are network throughput and client performance.

Let's move on to reposting:

We already noted above that disk performance is quite sufficient even in heavy modes, so SSD speed has no effect here either, except for the unoptimized base, which on the SSD caught up with the optimized one. This once again confirms that optimization arranges information in the database, reducing random I/O and increasing access speed.

On everyday tasks, the picture is similar:

Only the non-optimized base benefits from the SSD. Of course you can buy an SSD, but it is far better to think about timely maintenance of your bases. And do not forget to defragment the infobase partition on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of a locally installed 1C in the previous article, and much of what was said also holds for network mode. 1C indeed uses disk resources quite actively, including for background and scheduled tasks. In the figure below you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.

At the same time, bear in mind that for a workstation actively working with one or two infobases, the performance of an ordinary mass-market HDD is quite enough. Buying an SSD can speed up some processes, but you will not see a radical acceleration in everyday work, since loading, for example, is limited by network bandwidth.

A slow HDD can slow down some operations, but it cannot by itself cause the program to slow down.

RAM

Although RAM is now obscenely cheap, many workstations still run with the memory installed when they were bought, and this is where the first problems lie in wait. Given that the average "troika" needs about 500 MB of memory, 1 GB of total RAM will clearly not be enough to work with the program.

We reduced the system memory to 1 GB and launched two infobases.

At first glance things are not so bad: the program has moderated its appetite and fits entirely within available memory. But let's not forget that the need for working data has not changed, so where did it go? It was flushed to disk: cache, swap, and so on. The essence of the operation is that data not needed at the moment is moved from fast RAM, of which there is not enough, to slow disk.

Where does this lead? Let's see how system resources are used in heavy operations, for example by starting group reposting in two databases at once. First on a system with 2 GB of RAM:

As you can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant, rising occasionally during processing but never becoming a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard disk, while the processor and network sit idle waiting for the system to read the needed data from disk into memory and push unneeded data back out.

Even subjectively, working with two open databases on a 1 GB system proved extremely uncomfortable: directories and journals opened with noticeable delays and heavy disk access. Opening the Sales of goods and services journal, for example, took about 20 seconds, accompanied the whole time by high disk activity (highlighted with a red line).

In order to objectively assess the impact of RAM on the performance of configurations based on a managed application, we conducted three measurements: the loading speed of the first base, the loading speed of the second base, and group reposting in one of the bases. Both bases are completely identical and created by copying the optimized base. The result is expressed in relative units.

The result speaks for itself: if load time grows by about a third, which is still tolerable, the time to perform operations in the database triples, and there can be no talk of comfortable work in such conditions. Incidentally, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to address the cause rather than the consequences and simply buy the right amount of RAM.

A lack of RAM is the main reason working with the new 1C configurations becomes uncomfortable. Configurations with 2 GB of memory on board should be considered the minimum suitable. Keep in mind that our tests ran under "greenhouse" conditions: a clean system with only 1C and the task manager running. In real life a browser, an office suite, an antivirus, and so on are usually open on a work computer, so budget 500 MB per database plus some margin, so that heavy operations do not run into a memory shortage and a drastic drop in performance.
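That sizing advice can be turned into a small estimator. Only the 500 MB per base figure comes from the text; the OS/application overhead and the 25% margin below are assumptions for illustration:

```python
def required_ram_mb(open_bases, per_base_mb=500,
                    os_and_apps_mb=1500, margin=0.25):
    """Rough minimum RAM in MB for `open_bases` simultaneously open infobases.

    per_base_mb=500 follows the article; the OS/apps overhead and the
    25% headroom are assumed values, not measurements.
    """
    return int((open_bases * per_base_mb + os_and_apps_mb) * (1 + margin))

print(required_ram_mb(1))  # 2500 -> even one base is tight on a 2 GB machine
print(required_ram_mb(2))  # 3125 -> two open bases call for 4 GB
```

The point of the margin term is exactly the "greenhouse conditions" caveat above: real desktops run more than 1C and the task manager.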

CPU

The central processing unit can without exaggeration be called the heart of the computer, since it ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was run twice, with 1 GB and 2 GB of memory.

The result was quite interesting and unexpected: the more powerful processor took over the load quite effectively when resources were short, but the rest of the time gave no tangible advantage. 1C Enterprise can hardly be called a processor-hungry application; it is rather undemanding. And under difficult conditions the processor is burdened not so much by computing the application's own data as by servicing overhead: extra I/O operations and the like.

Conclusions

So why does 1C slow down? First of all, a lack of RAM: the main load in that case falls on the hard drive and processor. And if those are nothing special, as is usually true of office machines, you get the situation described at the start of the article: the "two" worked fine, while the "three" slows down shamelessly.

Second place goes to network performance: a slow 100 Mbps channel can become a real bottleneck, though thin-client mode can maintain a fairly comfortable level of work even over slow channels.

Then pay attention to the disk subsystem. Buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one will not hurt. The difference between hard-drive generations can be judged from our review of two inexpensive Western Digital Blue drives, 500 GB and 1 TB.

And finally the processor. A faster model will certainly not be superfluous, but there is little point in raising its performance unless the PC is used for heavy operations: batch processing, heavy reports, month-end closing, and so on.

We hope this material will help you quickly understand the question of "why 1C slows down" and solve it most effectively and at no extra cost.

Recently, users and administrators have increasingly begun to complain that new 1C configurations developed on the basis of a managed application are slow, in some cases unacceptably slow. It is clear that new configurations contain new functions and capabilities, and therefore are more demanding on resources, but most users do not have an understanding of what primarily affects the operation of 1C in file mode. Let's try to fix this gap.

In ours, we have already touched on the impact of the performance of the disk subsystem on the speed of 1C, however, this study concerned the local use of the application on a separate PC or terminal server. At the same time, most small implementations involve working with a file base over a network, where one of the user's PCs is used as a server, or a dedicated file server based on a regular, most often also inexpensive, computer.

A small study of Russian-language resources on 1C showed that this issue is diligently bypassed; in case of problems, it is usually advised to switch to client-server or terminal mode. And it has also become almost generally accepted that configurations on a managed application work much slower than usual ones. As a rule, arguments are given "iron": "here Accounting 2.0 just flew, and the" troika "is barely moving, of course, there is some truth in these words, so let's try to figure it out.

Resource consumption at a glance

Before starting this study, we set ourselves two goals: to find out if managed application-based configurations are actually slower than conventional configurations, and which resources have the highest impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1, respectively, allocating 2 cores of the host Core i5-4670 and 2 GB of RAM to them, which corresponds approximately to an average office machine. The server was placed on a RAID 0 array of two, and the client was placed on a similar array of general-purpose disks.

As experimental bases, we have chosen several configurations of Accounting 2.0, release 2.0.64.12 , which was then updated to 3.0.38.52 , all configurations were run on the platform 8.3.5.1443 .

The first thing that attracts attention is the increased size of the information base of the Troika, and it has grown significantly, as well as much greater appetites for RAM:

We are already ready to hear the usual: "what did they add to this trio", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Also, employees of specialized firms serving (read - updating) these bases rarely think about it.

Meanwhile, the 1C information base is a full-fledged DBMS of its own format, which also requires maintenance, and for this there is even a tool called Testing and fixing the infobase. Perhaps the name played a cruel joke, which seems to imply that this is a tool for troubleshooting, but poor performance is also a problem, and restructuring and reindexing, along with table compression, are well-known database optimization tools to any RDBMS administrator. Let's check?

After applying the selected actions, the database dramatically "lost weight", becoming even smaller than the "two", which no one has ever optimized either, and the RAM consumption also slightly decreased.

Subsequently, after loading new classifiers and directories, creating indices, etc. the size of the base will grow, in general, the bases of the "three" are larger than the bases of the "two". However, this is not more important, if the second version was content with 150-200 MB of RAM, then the new edition needs half a gigabyte already, and this value should be taken into account when planning the necessary resources to work with the program.

Net

Network bandwidth is one of the most important parameters for network applications, especially as 1C in file mode, moving significant amounts of data over the network. Most networks of small enterprises are built on the basis of inexpensive 100 Mbps equipment, so we started testing by comparing the performance indicators of 1C in 100 Mbps and 1 Gbps networks.

What happens when you start the 1C file base over the network? The client downloads a fairly large amount of information to temporary folders, especially if this is the first "cold" launch. At 100 Mbps, we expectedly run into the bandwidth and the download can take a long time, in our case, about 40 seconds (the price of the graph division is 4 seconds).

The second launch is faster, since some of the data is stored in the cache and remains there until the reboot. The transition to a gigabit network can significantly speed up the loading of the program, both "cold" and "hot", and the ratio of values ​​is observed. Therefore, we decided to express the result in relative terms, taking the highest value of each measurement as 100%:

As you can see from the graphs, Accounting 2.0 loads twice as fast at any network speed, the transition from 100 Mbps to 1 Gbps allows you to speed up the download time by four times. There is no difference between the optimized and non-optimized Troika databases in this mode.

We also checked the impact of network speed on heavy-duty operation, for example, during group re-hosting. The result is also expressed in relative terms:

Here it is already more interesting, the optimized base of the "troika" in a 100 Mbit / s network works at the same speed as the "two", and the unoptimized one shows twice the worse result. On a gigabit, the ratios are preserved, the non-optimized "three" is also twice as slow as the "two", and the optimized one lags behind by a third. Also, the transition to 1 Gb / s allows you to reduce the execution time by a factor of three for version 2.0 and two times for version 3.0.

In order to evaluate the impact of network speed on daily work, we used performance measurement by performing a sequence of predefined actions in each database.

Actually, for everyday tasks, network bandwidth is not a bottleneck, an unoptimized "three" is only 20% slower than a two, and after optimization it turns out to be about the same faster - the advantages of working in thin client mode affect. The transition to 1 Gb / s does not give the optimized base any advantages, and the non-optimized base and the deuce start to work faster, showing a small difference between them.

From the tests carried out, it becomes clear that the network is not a bottleneck for new configurations, and the managed application works even faster than usual. You can also recommend switching to 1 Gb/s if heavy tasks and database loading speed are critical for you, in other cases, new configurations allow you to work effectively even in slow 100 Mb/s networks.

So why does 1C slow down? We will investigate further.

Server disk subsystem and SSD

In the previous article, we achieved an increase in 1C performance by placing databases on SSD. Perhaps the performance of the server disk subsystem is not enough? We measured the performance of a disk server during a group run in two databases at once and got a rather optimistic result.

Despite the relatively high number of input / output operations per second (IOPS) - 913, the queue length did not exceed 1.84, which is a very good result for a two-disk array. Based on it, we can assume that a mirror from ordinary disks will be enough for the normal operation of 8-10 network clients in heavy modes.

So is an SSD needed on a server? The best answer to this question will help testing, which we conducted using a similar methodology, the network connection is 1 Gb / s everywhere, the result is also expressed in relative values.

Let's start with the database loading speed.

It may seem surprising to someone, but the SSD base on the server does not affect the download speed of the database. The main limiting factor here, as shown by the previous test, is network throughput and client performance.

Let's move on to rewiring:

We have already noted above that the disk performance is quite enough even for heavy-duty operation, therefore, the speed of the SSD is also not affected, except for the unoptimized base, which caught up with the optimized one on the SSD. Actually, this once again confirms that optimization operations organize information in the database, reducing the number of random I/O operations and increasing the speed of access to it.

On everyday tasks, the picture is similar:

Only the non-optimized base receives the benefit from the SSD. Of course, you can purchase an SSD, but it would be much better to think about the timely maintenance of the bases. Also, don't forget about defragmenting the infobase partition on the server.

Client disk subsystem and SSD

We analyzed the influence of SSD on the speed of locally installed 1C in , much of what has been said is also true for working in network mode. Indeed, 1C quite actively uses disk resources, including for background and scheduled tasks. In the figure below, you can see how Accounting 3.0 is quite actively accessing the disk for about 40 seconds after loading.

But at the same time, one should be aware that for a workstation where active work is performed with one or two information bases, the performance resources of a conventional HDD of a mass series are quite enough. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, downloading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but it cannot by itself cause a program to slow down.

RAM

Despite the fact that RAM is now obscenely cheap, many workstations continue to work with the amount of memory that was installed when they were purchased. This is where the first problems lie in wait. Based on the fact that the average "troika" requires about 500 MB of memory, we can assume that the total amount of RAM of 1 GB to work with the program will not be enough.

We reduced the system memory to 1 GB and launched two infobases.

At first glance, everything is not so bad, the program has moderated its appetites and completely kept within the available memory, but let's not forget that the need for operational data has not changed, so where did they go? Flushed to disk, cache, swap, etc., the essence of this operation is that data that is not needed at the moment is sent from fast RAM, the amount of which is not enough, to slow disk.

Where does this lead? Let's see how system resources are used during heavy operations, for example by starting group reposting in two databases at once. First on a system with 2 GB of RAM:

As you can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant, growing occasionally during processing, but it is not a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard disk, while the processor and network sit idle, waiting for the system to read the necessary data from disk into memory and push unneeded data back out.

At the same time, even subjectively, working with two open databases on a system with 1 GB of memory proved extremely uncomfortable: directories and journals opened with a significant delay and heavy disk access. For example, opening the Sales of goods and services journal took about 20 seconds, accompanied by high disk activity the whole time (highlighted by the red line).

In order to objectively assess the impact of RAM on the performance of configurations based on a managed application, we conducted three measurements: the loading speed of the first base, the loading speed of the second base, and group reposting in one of the bases. Both bases are completely identical and created by copying the optimized base. The result is expressed in relative units.

The result speaks for itself. If load time grows by about a third, which is still tolerable, the time to perform operations in the database triples; there is no point in talking about comfortable work under such conditions. Incidentally, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to fight the cause rather than the consequences and simply buy the right amount of RAM.

A lack of RAM is the main reason why working with new 1C configurations is uncomfortable. Configurations with 2 GB of memory on board should be considered the minimum suitable. And keep in mind that in our case "greenhouse" conditions were created: a clean system with only 1C and the task manager running. In real life a browser, an office suite, and an antivirus are usually also open on a work computer, so plan on 500 MB per database plus some margin, so that heavy operations do not run into a memory shortage and a drastic drop in performance.
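As a back-of-the-envelope sketch of that sizing rule (the 500 MB per base comes from the observations above; the 1 GB reserve for the OS, browser and antivirus is an assumed margin):

```shell
# Rough RAM sizing for a 1C file-mode workstation, in MB.
# 500 MB per open infobase (measured above); the 1024 MB reserve
# for the OS and other applications is an assumption.
BASES=2
PER_BASE=500
RESERVE=1024
echo $(( BASES * PER_BASE + RESERVE ))   # prints 2024
```

Two open bases thus already justify the 2 GB minimum recommended above.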

CPU

The central processor can without exaggeration be called the heart of the computer, since it is what ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was run twice, with 1 GB and 2 GB of memory.

The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took over the load when resources were short, while otherwise giving no tangible benefit. 1C Enterprise (in file mode) can hardly be called an application that actively uses processor resources; it is rather undemanding. And under difficult conditions the processor is burdened not so much by the application's own calculations as by overhead: additional I/O operations and the like.

Findings

So, why does 1C slow down? First of all, a lack of RAM; in this case the main load falls on the hard drive and processor. And if those are nothing special, as is usually the case in office configurations, then we get the situation described at the beginning of the article: the "two" worked fine, while the "three" slows down shamelessly.

Second place goes to network performance: a slow 100 Mbps channel can become a real bottleneck, although the thin client mode can maintain a fairly comfortable level of work even on slow channels.

Then pay attention to the disk subsystem: buying an SSD is unlikely to be a good investment, but replacing the disk with a more modern one will not hurt. The difference between generations of hard drives can be estimated from the following material: .

And finally the processor. A faster model will certainly not hurt, but there is little point in increasing its performance unless this PC is used for heavy operations: batch processing, heavy reports, month-end closing, and so on.

We hope this material helps you quickly get to the bottom of the question "why does 1C slow down" and solve it most effectively and at no extra cost.


2. A peculiarity of the program itself. Often, even with optimal settings, 1C works very slowly. Performance drops especially sharply when the number of users working with the database simultaneously exceeds 4-5.

Who are you in the company?

The solution to the problem of slow 1C depends on who you are in the company. If you are a technical person, just read on. If you are a director or accountant, follow the special link ↓

Network bandwidth

As a rule, several users, not just one, work with a single infobase (IB). Data is constantly exchanged between the computer on which the 1C client is installed and the computer hosting the IB, and the volume of this data is considerable. A situation often arises where a local network operating at 100 Mbps, the most common speed, simply cannot cope with the load. And again the user complains about the program's brakes.

Each of these factors individually already significantly reduces the speed of the program, but the most unpleasant thing is that these things usually add up.

Now let's look at several solutions to the problem of low 1C speed, and their cost, using the example of a local network of 10 average computers.

Solution one. Infrastructure Modernization

This is perhaps the most obvious solution. Let's calculate its minimum cost.

At a minimum, each computer needs a 2 GB RAM module, which costs on average 1,500 rubles, and a network card supporting 1 Gbps, about 700 rubles. In addition, you will need at least one router supporting 1 Gbps, which will cost about 4,000 rubles. In total: 26,000 rubles for the equipment, excluding labor.
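The total from the estimate above is easy to recheck (all figures in rubles, as quoted):

```shell
# Infrastructure upgrade for 10 computers:
# a 2 GB RAM module (1500) and a 1 Gbps NIC (700) per machine,
# plus one gigabit router (4000); labor excluded.
COMPUTERS=10
RAM_MODULE=1500
NIC=700
ROUTER=4000
echo $(( COMPUTERS * (RAM_MODULE + NIC) + ROUTER ))   # prints 26000
```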

In principle, the speed can increase significantly; however, it will no longer be possible to buy inexpensive computers for the office. Besides, this solution is not applicable for those who use Wi-Fi or want to work over the Internet, where the network speed can be ten times lower. The thought arises: "Is it possible to run the program entirely on one powerful server, so that the user's computer does not participate in complex calculations but simply transfers the picture?" Then you could work even on very weak computers, even on networks with low bandwidth. Naturally, such solutions exist.

Solution two. Terminal Server

It gained great popularity back in the days of 1C 7. It is implemented on server versions of Windows and does an excellent job with our task. However, it has its pitfalls, namely the cost of licenses.

The operating system itself will cost around 40,000 rubles. On top of that, for everyone who plans to work in 1C we also need a Windows Server CAL license, about 1,700 rubles, and a Windows Remote Desktop Services CAL license, about 5,900 rubles.

Calculating the cost for a network of 10 computers, we end up with 116,000 rubles for the licenses alone. Add to this the cost of the server itself (at least 40,000 rubles) and the cost of the implementation work; even without those, the price of the licenses is impressive.
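The licensing arithmetic from the two paragraphs above works out like this (rubles, list prices as quoted):

```shell
# Terminal server licensing for 10 users:
# Windows Server itself, plus a Server CAL and an RDS CAL per user.
USERS=10
WINDOWS_SERVER=40000
SERVER_CAL=1700
RDS_CAL=5900
echo $(( WINDOWS_SERVER + USERS * (SERVER_CAL + RDS_CAL) ))   # prints 116000
```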

Solution three. 1C Enterprise Server

1C has developed its own solution to this problem, which can seriously increase the speed of the program. But here there is a nuance.

The fact is that the cost of such a solution ranges from 50,000 to 80,000 rubles depending on the edition. For a company of up to 15 users this is a bit expensive. Great hopes were pinned on the "1C Enterprise mini-server", which, according to 1C, is aimed at small businesses and costs around 10,000-15,000 rubles.

However, when it went on sale, the product was a big disappointment: the maximum number of users the mini-server supported was only 5.

As one 1C programmer wrote on a forum: "It is still unclear why 1C chose exactly 5 connections! The problems only start at 4 users, yet here everything ends at five. If you want to connect a sixth, pay another 50 thousand. They could at least have made it 10 connections..."

Of course, the mini-server also found its consumer. However, for companies where more than 5 people work with 1C, a simple and inexpensive solution has not yet appeared.

In addition to the acceleration methods described above, there is another that is ideal for the 5-15 user segment, namely web access to 1C in file mode.

Solution four. Web access to 1C in file mode

The principle of operation is as follows: an additional web server role is set up on one of the computers, and the infobase is published on it.

Naturally, this must be either the most powerful computer on the network or a separate machine dedicated to the role. After that, you can work with 1C in web server mode: all heavy operations are performed on the server side, while the traffic transmitted over the network, like the load on the client computer, is minimized.

Thus, even very weak machines can be used to work in 1C, and network bandwidth ceases to be critical. Our tests showed that you can work comfortably over mobile Internet on a cheap tablet without any discomfort.

This option is inferior to the 1C Enterprise server in speed, but up to 15-20 users the difference is practically invisible. By the way, you can use IIS (on Windows) or Apache (on Linux) as the web server, and both of these solutions are free!
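For Apache, the publication generated by the platform's own tools (the Designer's "Publish to web server" dialog or the webinst utility) looks roughly like the sketch below. The paths, the base name "accounting", and the "8.3.x" version in the module path are illustrative assumptions; the wsap24.dll module ships with the 1C platform:

```apache
# httpd.conf fragment: load the 1C web extension module
# (path and 8.3.x are placeholders - use your installed platform version)
LoadModule _1cws_module "C:/Program Files/1cv8/8.3.x/bin/wsap24.dll"

# Publication of the "accounting" file infobase
Alias "/accounting" "C:/inetpub/accounting/"
<Directory "C:/inetpub/accounting/">
    AllowOverride All
    Options None
    Require all granted
    SetHandler 1c-application
    ManagedApplicationDescriptor "C:/inetpub/accounting/default.vrd"
</Directory>
```

The descriptor file default.vrd referenced above points at the file base itself:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- base: the URL path of the publication; ib: the infobase connection string -->
<point xmlns="http://v8.1c.ru/8.2/virtual-resource-system"
       base="accounting"
       ib="File=&quot;C:\bases\accounting&quot;;"/>
```

After restarting Apache, clients connect with the thin client or a browser at http://server/accounting.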

Despite its obvious advantages, this way of optimizing 1C has not gained much popularity.

I can't say for sure, but most likely, this is due to two reasons:

  • A rather weak description in the technical documentation
  • It sits at the intersection of the responsibilities of the system administrator and the 1C programmer

Usually, when a system administrator is asked about low speed, he proposes an infrastructure upgrade or a terminal server; when a 1C specialist is asked, he proposes a 1C Enterprise server. So if in your company the specialist responsible for the infrastructure and the specialist responsible for 1C work hand in hand, you can safely use a solution based on a web server.

Let's speed up 1C. Remotely, quickly and without your participation

We know how to speed up 1C without disturbing the customer. We dig into the problem, do our job, and leave. If you want the program to work just fine, contact us. We'll figure it out.

Leave a request and get a free consultation on speeding up the program.

The main purpose of writing this article is to avoid having to repeat the obvious nuances to those administrators (and programmers) who have not yet gained experience with 1C.

A secondary goal: if I have made mistakes anywhere, Infostart will be the fastest to point them out to me.

V. Gilev's test has already become a de facto standard. The author gave quite clear recommendations on his website; I will simply present some results and comment on the most likely errors. Naturally, the test results on your equipment may differ; this is just a guideline for what the numbers should be and what you can strive for. Note right away that changes must be made step by step, checking after each step what result it gave.

There are similar articles on Infostart; in the relevant sections I will put links to them (if I miss something, please tell me in the comments and I will add it). So, suppose your 1C slows down. How do you diagnose the problem, and how do you understand who is to blame, the administrator or the programmer?

Initial data:

Test computer, the main guinea pig: HP DL180G6, 2x Xeon 5650, 32 GB, Intel 362i, Win 2008 R2. For comparison, a Core i3-2100 shows comparable results in the single-threaded test. The equipment was deliberately not the newest; on modern hardware the results are noticeably better.

For testing remote 1C and SQL servers, the SQL server: IBM System x3650 M4, 2x Xeon E5-2630, 32 GB, Intel 350, Win 2008 R2.

To test the 10 Gbit network, Intel 520-DA2 adapters were used.

File version (the base lies on the server in a shared folder; clients connect over the network via the CIFS/SMB protocol). Step-by-step algorithm:

0. Add the Gilev test database to the file server, in the same folder as the main databases. Connect from a client computer and run the test. Note the result.

For reference: even for old computers of 10 years ago (a Pentium on socket 775), the time from clicking the 1C:Enterprise shortcut to the database window appearing should be under a minute. (A Celeron means slow work.)

If your computer is worse than a Pentium on socket 775 with 1 GB of RAM, I sympathize: comfortable work in 1C 8.2 in the file version will be hard to achieve. Think about either an upgrade (long overdue) or switching to a terminal server (or, with thin clients and managed forms, a web server).

If the computer is not worse, then you can kick the administrator. At a minimum, check the operation of the network, antivirus, and HASP protection driver.

If Gilev's test at this stage shows 30 "parrots" or more but the working 1C base still runs slowly, the questions go to the programmer.

1. As a guideline for how much a client computer can "squeeze out", we check the operation of this computer alone, without the network. We put the test base on the local computer (on a very fast disk). If the client computer has no decent SSD, a ramdisk is created. For now, the simplest free option is Ramdisk enterprise.

To test version 8.2, 256 MB of ramdisk is enough, and one thing matters most: after rebooting, with the ramdisk running, the computer should have 100-200 MB of memory free. Accordingly, without a ramdisk, normal operation requires 300-400 MB of free memory.

For testing version 8.3, a 256 MB ramdisk is enough, but more free RAM is needed.

When testing, watch the processor load. In a case close to ideal (a ramdisk), the local file-mode 1C loads one processor core while running. So if during testing your processor core is not fully loaded, look for weak points. The influence of the processor on 1C operation has been described elsewhere, a little emotionally but generally correctly. Just for reference: even on a modern Core i3 with a high frequency, figures of 70-80 are quite realistic.

The most common mistakes at this stage.

a) An incorrectly configured antivirus. There are many antiviruses and the settings differ for each; I will only say that, properly configured, neither Dr.Web nor Kaspersky interferes with 1C. With "default" settings, about 3-5 parrots (10-15%) can be lost.

b) Performance mode. For some reason few people pay attention to this, yet the effect is the most significant. If you need speed, you must enable it on both client and server computers. (Gilev has a good description. The only caveat: on some motherboards, if you turn off Intel SpeedStep, you cannot turn on Turbo Boost.)

In short, during 1C operation there are a great many waits for responses from other devices (disk, network, and so on). While waiting, if the power mode is "balanced", the processor lowers its frequency. A response arrives and 1C (the processor) has to work, but the first cycles run at the reduced frequency; then the frequency rises, and 1C is again waiting for a device. And so on, many hundreds of times per second.

You can (and preferably) enable performance mode in two places:

Through the BIOS. Disable the C1, C1E, and Intel C-state (C2, C3, C4) modes. They are named differently in different BIOSes, but the meaning is the same. It takes a while to find and requires a reboot, but once done you can forget about it. If everything is done correctly in the BIOS, speed will be added. On some motherboards the BIOS can be configured so that the Windows power mode no longer plays a role. (Examples of BIOS settings are at Gilev's site.) These settings mainly concern server processors or "advanced" BIOSes; if you have not found them and you do not have a Xeon, that's fine.

Control Panel - Power Options - High performance. The minus: if the computer has not been serviced for a long time, its fan will buzz louder, and it will heat up more and consume more power. That is the price of performance.

How to check that the mode is enabled: run Task Manager - Performance - Resource Monitor - CPU, and wait until the processor is idle.

Default settings: BIOS C-state enabled, balanced power mode.

BIOS C-state enabled, high performance mode. For Pentium and Core you can stop there; out of a Xeon you can still squeeze a few more "parrots".

BIOS C-state off, high performance mode. If you do not use Turbo Boost, this is how it should look: a server tuned for performance.


And now the numbers. Let me remind you: Intel Xeon 5650, ramdisk. In the first case the test shows 23.26, in the last 49.5. The difference is almost twofold. The numbers may vary, but the ratio stays roughly the same for an Intel Core as well.
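The "almost twofold" difference between those two runs is easy to verify:

```shell
# Ratio of the tuned result (49.5 parrots) to the default one (23.26 parrots)
awk 'BEGIN { printf "%.2f\n", 49.5 / 23.26 }'   # prints 2.13
```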

Dear administrators, you can scold 1C as you like, but if end users need speed, you must enable high performance mode.

c) Turbo Boost. First you need to find out whether your processor supports this function. If it does, then you can still quite legally get some performance. (I do not want to touch on overclocking, especially of servers; do that at your own risk. But I agree that raising the bus speed from 133 to 166 gives a very noticeable increase in both speed and heat dissipation.)

How to turn on Turbo Boost is written elsewhere. But! For 1C there are some nuances (not the most obvious). The difficulty is that the maximum effect of Turbo Boost appears when C-state is turned on. And the picture comes out something like this:

Note that the multiplier is at maximum, the core speed is gorgeous, the performance is high. But what will come of it in 1C?

Factor                      Core speed, GHz   CPU-Z Single Thread   Gilev ramdisk, file   Gilev ramdisk, client-server
without turbo boost         -                 -                     -                     -
C-state off, turbo boost    -                 -                     53,19                 40,32
C-state on, turbo boost     -                 1080                  53,13                 23,04

But in the end it turns out that, according to CPU performance tests, the variant with a multiplier of 23 is ahead; in Gilev's file-version tests the performance with multipliers of 22 and 23 is the same; but in the client-server version the variant with a multiplier of 23 is horror, horror, horror (even with C-state set to level 7 it is still slower than with C-state off). So the recommendation is: check both options on your own hardware and choose the better one. In any case, the difference between 49.5 and 53 parrots is quite significant, especially since it comes with no particular effort.

Conclusion: Turbo Boost must be enabled. Let me remind you that it is not enough to enable the Turbo Boost item in the BIOS; you also need to look at the other settings (BIOS: QPI L0s, L1 - disable; demand scrubbing - disable; Intel SpeedStep - enable; Turbo Boost - enable. Control Panel - Power Options - High performance). And I would still (even for the file version) settle on the option with C-state off, even though the multiplier is lower there. It comes out something like this...

A rather controversial point is memory frequency. For example, memory frequency is sometimes shown as very influential. My tests did not reveal such a dependence. I will not compare DDR2/3/4; I will show the results of changing the frequency within the same line. The modules are the same, but in the BIOS we force lower frequencies.




And the test results. 1C 8.2.19.83; for the file version a local ramdisk, for client-server 1C and SQL on one computer, Shared Memory. Turbo Boost is disabled in both variants. 8.3 shows comparable results.

The difference is within the measurement error. I deliberately pulled out the CPU-Z screenshots to show that other parameters change along with the frequency, the same CAS Latency and RAS to CAS Delay, which levels out the frequency change. The difference appears when the memory modules are physically replaced, from slower to faster, but even there the numbers are not particularly significant.

2. Once we have dealt with the processor and memory of the client computer, we move on to the next very important place: the network. Many volumes have been written about network tuning and there are articles on Infostart, so I will not focus on the topic here. Before starting 1C testing, make sure iperf between the two computers shows the full bandwidth (for 1 Gbit cards - well, at least 850 Mbit, but better 950-980) and that Gilev's advice is followed. Then the simplest test of operation is, oddly enough, copying one large file (5-10 GB) over the network. An indirect sign of normal operation on a 1 Gbps network is an average copy speed of 100 MB/s; 120 MB/s indicates good work. Note that processor load can also be a weak point. The SMB protocol on Linux parallelizes rather poorly and under load can quite easily "eat" one processor core and go no further.
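For reference, the theoretical ceiling of a 1 Gbps link works out as follows; real SMB copying loses part of it to protocol overhead, which is why 100-120 MB/s in practice is already a good result:

```shell
# 1 Gbps = 1000 Mbit/s; 8 bits per byte gives the ceiling in MB/s
echo $(( 1000 / 8 ))   # prints 125
```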

And another thing. With default settings, a Windows client works best with a Windows server (or even a Windows workstation) over SMB/CIFS, while a Linux client (Debian, Ubuntu; I did not look at the rest) works better with Linux and NFS (it also works over SMB, but the parrots are higher on NFS). The fact that a linear copy from a Windows server to NFS goes faster in a single stream means nothing. Tuning Debian for 1C is a topic for a separate article that I am not ready for yet, although I can say that in the file version I even got slightly better performance than the Windows version on the same equipment; but with postgres and over 50 users everything is still very bad for me.

The most important thing, known to seasoned administrators but overlooked by beginners: there are many ways to set the path to the 1C database. You can use \\server\share, or \\192.168.0.1\share, or net use z: \\192.168.0.1\share (and in some cases that method also works, but not always) and then specify drive Z. All these paths seem to point to the same place, but for 1C only one of them gives consistently stable performance. So here is what to do right:

In the command line (or in policies, or however is convenient for you), do net use DriveLetter: \\server\share. Example: net use m: \\server\bases. I deliberately emphasize NOT the IP address but the server NAME. If the server is not visible by name, add it to DNS on the server, or locally to the hosts file. The reference must be by name. Accordingly, in the path to the database, use this drive (see the picture).

And now I will show in numbers why this advice. Initial data: Intel X520-DA2, Intel 362, Intel 350, and Realtek 8169 cards; OS Win 2008 R2, Win 7, Debian 8; latest drivers and updates applied. Before testing I made sure iperf gave full bandwidth (except for the 10 Gbit cards, where I managed to squeeze out only 7.2 Gbit; I will look into why later, as that test server is not yet configured properly). The disks vary, but everywhere an SSD (a single disk inserted specially for testing, nothing else loaded) or a RAID of SSDs. The 100 Mbit speed was obtained by limiting the settings of the Intel 362 adapter. There was no difference between 1 Gbit copper (Intel 350) and 1 Gbit optics (Intel X520-DA2 limited to that speed). Maximum performance, Turbo Boost off (just for comparability of results; Turbo Boost adds a little under 10% to good results and may not affect bad ones at all). Versions: 1C 8.2.19.86 and 8.3.6.2076. I give not all the numbers, only the most interesting, so there is something to compare.

(All pairs are Win 2008 server - Win 2008 client unless noted otherwise; the speed grouping follows the test setup described above; three rows per platform are three runs; "-" means no measurement.)

                       100 Mbit            1 Gbit                                         10 Gbit
                       by IP    by name    by IP    by name   Win 7,     Debian,          by IP    by name
                                                              by name    by name
1C 8.2 (8.2.19.83)     11,20    26,18      15,20    43,86     40,65      37,04            16,23    44,64
                       11,29    26,18      15,29    43,10     40,65      36,76            15,11    44,10
                       12,15    25,77      15,15    43,10     -          -                14,97    42,74
1C 8.3 (8.3.6.2076)    6,13     34,25      14,98    43,10     39,37      37,59            15,53    42,74
                       6,61     33,33      15,58    43,86     40,00      37,88            16,23    42,74
                       -        33,78      15,53    43,48     39,37      37,59            -        42,74

Conclusions (from the table and from personal experience; applies to the file version only):

Over the network you can get quite decent numbers if the network is configured normally and the path is written correctly in 1C. Even first-generation Core i3s can give 40+ parrots, which is quite good, and these are not just parrots: in real work the difference is also noticeable. But! With several (more than 10) users the limitation will no longer be the network - 1 Gbit is still enough - but locking during multi-user work (see Gilev).

The 1C 8.3 platform is many times more demanding of competent network setup. For the basic settings see Gilev, but keep in mind that anything can have an influence. I have seen speedups from uninstalling (not just disabling) the antivirus, from removing protocols like FCoE, from switching drivers to an older but Microsoft-certified version (especially relevant for cheap cards like ASUS and D-Link), and from removing a second network card from the server. Lots of options; configure the network thoughtfully. There may well be a situation where platform 8.2 gives acceptable numbers and 8.3 gives two or more times less. Try playing with 8.3 platform versions; sometimes the effect is very large.

1C 8.3.6.2076 (maybe later too; I have not looked for the exact version) is still easier to set up over the network than 8.3.7.2008. I managed to achieve normal network operation (in comparable parrots) with 8.3.7.2008 only a few times, and could not reproduce it in the general case. I did not dig deep, but judging by the traces from Process Explorer, writes there do not happen the way they do in 8.3.6.

Although on a 100 Mbps network the load graph is low (you could say the network is idle), the speed of work is still much lower than on 1 Gbps. The reason is network latency.

Other things being equal (a well-functioning network), for 1C 8.2 an Intel-Realtek link is 10% slower than Intel-Intel, while Realtek-Realtek can produce sharp dips out of the blue. So if money allows, keep Intel network cards everywhere; if not, put Intel at least on the server (hello, Captain Obvious). There are also many times more instructions for tuning Intel network cards.

Default antivirus settings (for example Dr.Web version 10) take away about 8-10% of the parrots. If configured properly (allow the 1cv8 process to do everything, although this is not safe), the speed is the same as without an antivirus.

Do NOT blindly follow Linux gurus. A server with Samba is great and free, but if you put Win XP or Win 7 (or better, a server OS) on the server, the file version of 1C will work faster. Yes, Samba, the protocol stack, the network settings, and much more can be tuned well in Debian/Ubuntu, but that is a job for specialists. There is no sense in installing Linux with default settings and then saying it is slow.

It is a good idea to test disks connected via net use with fio. At least it will be clear whether the problems are with the 1C platform or with the network/disk.

For the single-user variant I cannot think of tests (or a situation) where the difference between 1 Gbit and 10 Gbit would be visible. The only place where 10 Gbps gave better results for the file version was connecting disks via iSCSI, but that is a topic for a separate article. Still, I think 1 Gbit cards are enough for the file version.

Why 8.3 works noticeably faster than 8.2 on a 100 Mbit network I do not understand, but it is a fact. All other equipment and all other settings are exactly the same; only the platform version differs.

Untuned NFS (win-win or win-lin) gives 6 parrots, so I did not include it in the table. After tuning I got 25, but it is unstable (the spread between measurements is more than 2 units). For now I cannot recommend using Windows with the NFS protocol.

After all the settings and checks, run the test again from the client computer and rejoice at the improved result (if it worked out). If the result has improved, there are more than 30 parrots (and especially more than 40), fewer than 10 users work simultaneously, and the working database still slows down, it is almost certainly a programmer problem (or you have already hit the ceiling of the file version's capabilities).

Terminal server (the base lies on the server; clients connect over the network via the RDP protocol). Step-by-step algorithm:

0. Add the Gilev test database to the server, in the same folder as the main databases. Connect from the same server and run the test. Note the result.

1. Set things up the same way as in the file version. In the case of a terminal server, the processor generally plays the main role (it is assumed there are no obvious weaknesses, such as a lack of memory or a huge amount of unnecessary software).

2. Configuring network cards has practically no effect on 1C operation on a terminal server. For "special" comfort, if your server gives out more than 50 parrots, you can play with newer versions of the RDP protocol, purely for user comfort: faster response and scrolling.

3. With a large number of actively working users (and here you can already try connecting 30 people to one base), it is very desirable to install an SSD. For some reason it is believed that the disk does not particularly affect 1C operation, but all such tests are run with the controller's write cache enabled, which is wrong. The test base is small and fits entirely in the cache, hence the high numbers. On real (large) databases everything will be completely different, so the cache is disabled for these tests.

For example, I checked the Gilev test with different disk options. I installed disks from whatever was at hand, just to show the tendency. The difference between 8.3.6.2076 and 8.3.7.2008 is small (in the ramdisk Turbo Boost variant, 8.3.6 gives 56.18 and 8.3.7.2008 gives 55.56; in other tests the difference is even smaller). Power mode: maximum performance, Turbo Boost disabled (unless noted otherwise).

(Three rows per platform are three runs; "-" means no measurement. The SATA array uses ATA ST31500341AS drives; the last column is with the RAID controller cache enabled.)

                       RAID10     RAID10     RAID10     Single    Ramdisk   Ramdisk         RAID cache
                       4x SATA    4x SAS     4x SAS     SSD                 (Turbo Boost)   enabled
                       7200       10k        15k
1C 8.2 (8.2.19.83)     21,74      28,09      32,47      49,02     50,51     53,76           49,02
                       21,65      28,57      32,05      48,54     49,02     53,19           -
                       21,65      28,41      31,45      48,54     49,50     53,19           -
1C 8.3 (8.3.7.2008)    33,33      42,74      45,05      51,55     52,08     55,56           51,55
                       33,46      42,02      45,05      51,02     52,08     54,95           -
                       35,46      43,01      44,64      51,55     52,08     56,18           -

The enabled RAID controller cache eliminates all the difference between the disks; the numbers are the same for both SATA and SAS. Testing with it on a small amount of data is useless and not indicative.

For the 8.2 platform, the performance difference between the SATA and SSD options is more than double. This is not a typo. If you watch the performance monitor during a test on SATA drives, "Active disk time (%)" clearly sits at 80-95. Yes, if you enable the disks' own write cache, the score rises to 35; with the RAID controller cache enabled, to 49 (regardless of which disks are being tested at the moment). But these are synthetic cache "parrots"; in real work with large databases there will never be a 100% write-cache hit ratio.
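Using the 8.2 first-run numbers from the disk results above, the "more than double" claim is simple arithmetic:

```python
# Gilev scores for platform 8.2, first run (higher is better),
# taken from the disk test results above.
sata_raid10 = 21.74   # Raid 10, 4x SATA 7200
single_ssd = 49.02
ramdisk = 50.51

print(f"SSD vs SATA raid: x{single_ssd / sata_raid10:.2f}")  # more than double
print(f"ramdisk vs SSD:   x{ramdisk / single_ssd:.2f}")      # SSD is already close to RAM
```

The second ratio is the interesting one: for this file-base workload a single SSD already gets within a few percent of a ramdisk.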

The speed of even a cheap SSD (I tested an Agility 3) is enough for the file version to work. Write endurance is another matter and must be evaluated case by case; an Intel 3700 will obviously last an order of magnitude longer, but its price is correspondingly higher. And yes, I understand that when testing an SSD I am also largely testing that drive's own cache; the real results will be lower.

The most correct solution (from my point of view) is to dedicate two SSDs in a mirrored RAID to the file base (or several file bases) and put nothing else there. Yes, in a mirror both SSDs wear out at the same rate, which is a minus, but at least you are somewhat insured against errors in the controller electronics.

The main advantages of SSDs for the file version appear when there are many databases, each with several users. If there are one or two bases and around 10 users, SAS disks will be enough. (In any case, watch the load on these disks, at least through perfmon.)

The main advantages of a terminal server are that clients can be very weak, and network settings affect a terminal server far less (Captain Obvious again).

Conclusions: if you run the Gilev test on the terminal server (from the same disk that holds the working databases) at the moments when the working database is slow, and the test shows a good result (above 30), then the slowness of the working base is most likely the programmer's fault.

If the Gilev test shows low numbers even though you have a high-frequency processor and fast disks, then the administrator needs to take at least perfmon, record all the results somewhere, and watch, observe, and draw conclusions. There is no universal advice here.
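Since there is no universal advice, "record and observe" is the whole game. Below is a minimal sketch of the observing step in Python, assuming the perfmon log was exported to CSV (for example with `relog perf.blg -f csv -o perf.csv`); the host name, counter path, and sample data are illustrative:

```python
import csv, io

# Minimal sketch: scan a perfmon log exported to CSV and flag samples
# where the disk was busy more than 80% of the time. The host name,
# counter path, and sample values below are illustrative.
SAMPLE = r'''"(PDH-CSV 4.0)","\\SRV1\PhysicalDisk(_Total)\% Disk Time"
"10/21/2015 10:00:00","45.2"
"10/21/2015 10:00:15","91.7"
"10/21/2015 10:00:30","88.3"
'''

def busy_samples(csv_text, counter="% Disk Time", threshold=80.0):
    """Return (timestamp, value) pairs where the counter exceeds threshold."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    col = next(i for i, name in enumerate(header) if counter in name)
    return [(row[0], float(row[col])) for row in reader
            if float(row[col]) > threshold]

for ts, value in busy_samples(SAMPLE):
    print(ts, value)
```

If the saturated intervals coincide with user complaints, the disks are the bottleneck; if not, keep looking.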

Client-server option.

Tests were carried out only on 8.2, because on 8.3 everything depends quite heavily on the specific version.

For testing, I chose different server options and networks between them to show the main trends.

Tested configurations: SQL server on a Xeon E5-2630 with local SSD, Fiber Channel SSD, or Fiber Channel SAS storage; 1C server on a Xeon 5650 (an older 5520 is compared in the text below); one variant with shared memory enabled. [The original table layout was lost in conversion, so the column-to-configuration mapping cannot be restored; the raw Gilev scores for the three runs on platform 1С 8.2 were:]

16,78 18,23 16,84 28,57 27,78 32,05 34,72 36,50 23,26 40,65 39,37
17,12 17,06 14,53 29,41 28,41 31,45 34,97 36,23 23,81 40,32 39,06
16,72 16,89 13,44 29,76 28,57 32,05 34,97 36,23 23,26 40,32 39,06

It seems I have covered all the interesting options; if you are interested in something else, write in the comments and I will try to test it.

SAS on the storage system is slower than local SSDs, even though the storage has a large cache. SSDs, both local and on the storage system, work at comparable speeds in the Gilev test. I do not know of any standard multi-threaded test (one that loads all the hardware, not just writes) apart from the 1C load test from the MCC.

Changing the 1C server from a 5520 to a 5650 nearly doubled performance. Yes, the server configurations do not match completely, but the trend is clear (and not surprising).

Raising the frequency on the SQL server certainly gives an effect, but not the same one as on the 1C server: MS SQL Server is perfectly able (if you ask it) to use many cores and free memory.

Changing the network between 1C and SQL from 1 Gbps to 10 Gbps adds about 10% in parrots. I expected more.

Enabling shared memory still gives an effect, though not the 15% sometimes described. Be sure to do it: it is quick and easy. If the SQL server was given a named instance during installation, then for 1C to work the server name must be specified not as an FQDN (tcp/ip will work then), not as localhost or just ServerName, but as ServerName\InstanceName, for example zz-test\zztest. (Otherwise a DBMS error occurs: Microsoft SQL Server Native Client 10.0: Shared Memory Provider: The shared memory library used to connect to SQL Server 2000 was not found. HRESULT=80004005, HRESULT=80004005, HRESULT=80004005, SQLSrvr: SQLSTATE=08001, state=1, Severity=10, native=126, line=0.)
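The naming rule can be sketched as a tiny helper. The function `sql_server_field` and the host/instance names are hypothetical illustrations of the rule in the text, not part of 1C or SQL Server:

```python
from typing import Optional

# Hypothetical helper illustrating the rule above: for a named SQL
# instance, give 1C the short host name plus the instance name --
# not an FQDN (that falls back to tcp/ip) and not localhost.
def sql_server_field(host: str, instance: Optional[str] = None) -> str:
    short = host.split(".")[0]  # strip the domain part: no FQDN
    if short.lower() == "localhost":
        raise ValueError("use the real host name, not localhost")
    return f"{short}\\{instance}" if instance else short

print(sql_server_field("zz-test.example.local", "zztest"))  # zz-test\zztest
```

With a default (unnamed) instance the plain short host name is enough.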

For fewer than 100 users, the only reason to split 1C and SQL onto two separate servers is a Windows 2008 Std (and older) license, which supports only 32 GB of RAM. In all other cases 1C and SQL should definitely be installed on the same server and given more memory (at least 64 GB). Giving MS SQL less than 24-28 GB of RAM is unjustified greed (if you think it has enough memory and everything works fine, maybe the 1C file version would be enough for you?).
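As a worked example of this sizing advice (the 64 GB total and the 24-28 GB floor are the article's figures; the 16 GB reserve for Windows and the 1C server processes is my assumption), keeping in mind that SQL Server's "max server memory" option is configured in megabytes:

```python
# Back-of-the-envelope memory split for a combined 1C + MS SQL server.
total_gb = 64          # the minimum recommended above
os_and_1c_gb = 16      # assumption: reserve for Windows + 1C server processes
sql_gb = total_gb - os_and_1c_gb

assert sql_gb >= 24, "less than 24-28 GB for MS SQL is 'unjustified greed'"
# SQL Server's 'max server memory' option is set in MB:
print(f"max server memory = {sql_gb * 1024} MB ({sql_gb} GB)")
```

Adjust the reserve to what the 1C server processes actually consume on your machine; the point is simply to leave SQL the lion's share.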

How much worse the 1C + SQL pair works in a virtual machine is a topic for a separate article (hint: noticeably worse). Even on Hyper-V things are not so clear-cut...

The "balanced" power plan is bad. The results agree well with the file version.

Many sources say that debug mode (ragent.exe -debug) causes a large drop in performance. Well, it does lower it, but I would not call 2-3% a significant effect.
