Building a semantic core for SEO promotion: compiling semantics by hand, with online services and programs, and what to consider when choosing keywords

The semantic core of a site is the list of queries for which you plan to promote the site in search engines. Queries from the semantic core are grouped by site page: the finished semantics includes queries for each page of the site that will be promoted.

Basic rules for compiling a semantic core

  1. Only one page is promoted per request. It is not allowed for one request to correspond to two or more pages on the site - otherwise, search engines may choose to display the wrong page for the request.
  2. The page must respond to the user's query. For example, if the request includes the word “prices”, the prices for the product should be indicated on the promoted page. If you are promoting a page for the request “CASCO calculator”, the page should have a form for calculating the cost of CASCO.
  3. The semantic core should include high, medium and low frequency queries. It is necessary to find the maximum number of suitable queries, otherwise you will not get the full benefit from the promotion.
  4. When grouping queries, include in one group only queries for which a single page can realistically be promoted. To check this, see whether there are pages in the search results that rank in the TOP 10 for all of the queries you selected. If there are no such pages, the queries need to be split into different groups.
  5. Check the influence of Yandex's "Spectrum". It may turn out that for your topic "Spectrum" has left not 10 organic places in the TOP but only 1 or 2, so competition for them intensifies. There are also queries for which search engines prefer to show informational articles in the TOP; a page with commercial information will not rank for them.
  6. Watch out for ads and wizards! In competitive topics, the search results may contain a lot of Direct advertising and "wizards" (Yandex's interactive answer blocks), which push organic results much lower and reduce the return on being in the TOP. An example of such a query: "buy air tickets" (see the screenshot below and try to find organic results in it).
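Rule 4 above can be sketched in a few lines. Fetching live TOP-10 results is out of scope here, so the URL lists in this example are purely hypothetical:

```python
# Heuristic from rule 4: two queries can be promoted on one page
# if their TOP-10 results share enough URLs.
def can_share_page(serp_a, serp_b, min_overlap=3):
    """Return True if the two TOP-10 URL lists share >= min_overlap URLs."""
    return len(set(serp_a) & set(serp_b)) >= min_overlap

# Hypothetical TOP-10 snapshots for two candidate queries:
top_a = ["site1.com", "site2.com", "site3.com", "site4.com"]
top_b = ["site2.com", "site3.com", "site4.com", "site5.com"]
print(can_share_page(top_a, top_b))  # True: 3 shared URLs
```

If the function returns False, the rule says the queries belong in different groups, each promoted on its own page.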

How many queries should the semantic core include?

The maximum possible number - ideally, all queries that are in your topic and suitable for you (that is, your site can actually answer these queries).

As a rule, small and medium-sized sites have a semantic core of several hundred to thousands of queries. Large projects can be promoted based on tens and hundreds of thousands of requests.

For example, our blog about website promotion receives visits from search engines for more than 2,000 different queries per month.

An example of a simple semantic core for an SEO blog

This example is educational and reflects the essence of grouping, but it is not the real core of any project.

How can you collect a semantic core?

  1. Copy from competitors. Select 2-3 competitor sites and, using special tools, extract the queries for which their sites rank in the TOP 10. For example, you can get such information for free through the Seopult.ru service. As competitors, choose the most popular sites in the topic or the sites of companies whose product range is as close as possible to your project.
    Advantages of the method: saving time on creating a semantic core, relative simplicity and free of charge.
    Disadvantages of the method: a large number of "garbage" queries, so the received data will need to be filtered and processed further; there is a risk of copying competitors' mistakes. Queries that your competitors for some reason did not select or find will be missing from your semantics.
  2. Promote queries close to the TOP. This method suits only old sites that have been promoted before. Using the tools from point 1, collect the queries for which the project is already in the TOP 30 and include them in the semantic core.
    Advantages of the method: saving the customer’s time and budget. Faster return on promotion.
    Disadvantages of the method: This approach allows you to collect a minimum number of requests. In the future, the semantic core needs to be expanded. There is no guarantee that all requests received will be effective for the client’s business.
  3. Create a semantic core from scratch. Semantics is formed based on a deep analysis of queries that can be used to search for promoted goods, services or information.
    Advantages of the method: collecting the maximum number of requests for the most effective promotion.
    Disadvantages of the method: long and expensive.

Stages of compiling a semantic core for a website from scratch

  1. Project Analysis. As part of the analysis, it is necessary to compile a complete list of services, product categories or types of information presented on the client’s website. At this stage, the company's potential client is also analyzed. For example, if a company wants to sell products in the premium segment, there is no point in offering queries with the word “cheap” for promotion. It’s best to write everything down on a piece of paper; even better, create tables in Excel.
  2. Brainstorm. At this stage, the project team compiles a list of queries that, in the opinion of team members, can be used to search for each product, service or type of information on the client’s website. You can involve the client and third-party people not related to SEO in brainstorming and ask them questions about how they will search for this or that information on the Internet - what queries to ask, etc. People are very different and sometimes they look for information based on queries that no specialist can guess! It is useful to study the texts on the client’s and competitors’ websites - as a rule, they contain parts of search queries, different names of the same products - that is, in fact, all combinations of words and phrases by which they can be searched through search engines.
  3. Pull in search queries from other sources (at the end of the article there are links to the most useful programs for this task):
    • Statistics of requests in Yandex and Google;
    • Search tips in search engines;
    • Statistics of transitions to the site from search engines (if the site has traffic);
    • Key queries from competitors;
    • Pastukhov's database contains about 800 million queries asked by search engine users. The database is constantly updated and supplemented. Paid.
  4. Filter queries, removing duplicates and "empty" phrases. At this stage, the query lists received from different sources are merged. Duplicates and "empty" queries are removed from the combined list. A phrase is considered "empty" if entering it in quotation marks in a search-statistics system returns zero frequency. Learn more about determining query frequency.
  5. Grouping requests. At this stage, groups are identified from all requests, according to which individual sections and pages of the site will be promoted. If your site does not have suitable pages for promotion for certain groups of queries, such pages need to be created.
  6. Think again. Sometimes it is useful, after creating a semantic core, to rest for 1-2 days and return to this issue again - to look at all the collected information with a fresh look. Sometimes new ideas appear on how else people search for information through search engines - and it turns out to expand the semantic core.
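Step 4 above (deduplication and removing "empty" phrases) can be sketched as follows. The frequency dictionary here is a stand-in for the quoted-query frequencies you would actually fetch from Wordstat:

```python
def filter_queries(queries, exact_frequency):
    """Merge query lists, dropping duplicates (case/whitespace-insensitive)
    and 'empty' phrases, i.e. those whose exact ("quoted") frequency is zero."""
    seen, result = set(), []
    for q in queries:
        key = " ".join(q.lower().split())  # normalize for dedup
        if key in seen:
            continue
        seen.add(key)
        if exact_frequency.get(key, 0) > 0:
            result.append(q)
    return result

# Stand-in frequencies; in practice these come from quoted Wordstat queries.
freqs = {"buy a postcard": 1200, "postcard price": 300, "postcard zzz": 0}
raw = ["Buy a postcard", "buy a postcard", "postcard zzz", "postcard price"]
print(filter_queries(raw, freqs))  # ['Buy a postcard', 'postcard price']
```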

Semantic core testing

Once you have selected the semantic core for the site, it is advisable to test it. You can do this by launching a test campaign in a contextual advertising system. This is expensive, but it will let you identify the most effective queries and possibly weed out queries that do not bring in sales.

You can read more about semantic core testing in the article Five reasons to buy contextual advertising.

Development of site semantics

Having collected the semantic core once, you cannot leave it unchanged. New products appear, new requests appear, and old ones lose relevance. Therefore, at least once every six months to a year, it is necessary to update the semantic core and connect new queries to promotion, and exclude old ones that have lost their relevance from promotion.

In the comments, you can ask questions about how to create a semantic core - we will help and answer if possible.

Useful sites and services for selecting a semantic core:

  • Wordstat.yandex.ru – a tool for viewing query statistics in Yandex;
  • Rush-analytics.ru - the service allows you to collect large cores based on Yandex.Wordstat data and on search suggestions in Yandex and Google. They give a nice bonus when you register in the system.
  • Topvisor.ru - a service that allows you to automatically group queries from the semantic core. You can set the grouping precision, which affects the number of requests in one group.
  • Advse.ru is a tool that allows you to see what queries competitors are displaying contextual advertising for (you can promote for the same queries)
  • Pastukhov's database is a huge database of queries for the Yandex search engine; at the time of writing, it consisted of 800 million queries.
  • Seopult.ru is a tool that lets you see the positions of your website or competitors in search results for free. To view positions, you need to register in the system, create a project and reach the keyword selection stage.

Let's write a simple kernel that can be booted using the GRUB bootloader on an x86 system. This kernel will display a message on the screen and wait.

How does an x86 system boot?

Before we start writing the kernel, let's understand how the system boots and transfers control to the kernel.

Most processor registers already contain certain values at startup. The instruction pointer register (EIP) stores the memory address of the instruction the processor will execute. After reset, EIP is 0xFFFFFFF0. Thus, x86 processors start executing at the physical address 0xFFFFFFF0, which is 16 bytes below the top of the 32-bit address space. This address is called the reset vector.

The chipset's memory map ensures that 0xFFFFFFF0 is mapped to a part of the BIOS ROM, not RAM. Meanwhile, the BIOS copies itself to RAM for faster access. Address 0xFFFFFFF0 contains only a jump instruction to the address in memory where the copy of the BIOS is stored.

This is how the BIOS code begins to execute. The BIOS first looks for a bootable device, in a preset order. A magic number determines whether a device is bootable: bytes 511 and 512 of the first sector must contain the 16-bit value 0xAA55 (stored little-endian as 0x55, 0xAA).
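The boot-signature check is easy to reproduce yourself. A minimal sketch, assuming you have some 512-byte first sector as a bytes object (reading it from a disk image is up to you):

```python
def is_bootable(sector: bytes) -> bool:
    """Check the BIOS boot signature: the last two bytes of the 512-byte
    first sector must be 0x55, 0xAA (the magic number 0xAA55, little-endian)."""
    return len(sector) >= 512 and sector[510:512] == b"\x55\xaa"

# Build a fake boot sector and verify the check.
sector = bytearray(512)
sector[510], sector[511] = 0x55, 0xAA
print(is_bootable(bytes(sector)))  # True
print(is_bootable(bytes(512)))     # False: signature missing
```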

When the BIOS finds a boot device, it copies the contents of the device's first sector into RAM starting at physical address 0x7c00, then jumps to that address and executes the loaded code. This code is called the bootloader.

The bootloader loads the kernel at physical address 0x100000. This address is used as the start address by most big kernels on x86 systems.

All x86 processors start out in a simple 16-bit mode called real mode. The GRUB bootloader switches to 32-bit protected mode by setting the lowest bit of the CR0 register to 1. Thus, the kernel is loaded in 32-bit protected mode.

Note that in the case of the Linux kernel, GRUB detects the Linux boot protocol and loads the kernel in real mode; the kernel then switches itself to protected mode.

What do we need?

  • x86 computer;
  • Linux;
  • ld (GNU Linker);
  • NASM (assembler);
  • GCC.

Setting the entry point in assembler

No matter how much you would like to limit yourself to C alone, you will have to write something in assembler. We will write a small file on it that will serve as the starting point for our kernel. All it will do is call an external function written in C and stop the program flow.

How can we make sure that this code is the starting point?

We will use a linker script that links object files to create the final executable file. In this script we will explicitly indicate that we want to load data at address 0x100000.

Here is the assembler code:

;;kernel.asm
bits 32                 ;nasm directive - 32 bit
section .text

global start
extern kmain            ;kmain is defined in the c file

start:
  cli                   ;block interrupts
  mov esp, stack_space  ;set stack pointer
  call kmain
  hlt                   ;halt the CPU

section .bss
resb 8192               ;8KB for stack
stack_space:

The first instruction, bits 32, is not an x86 assembly instruction. This is a directive to the NASM assembler that specifies code generation for a processor operating in 32-bit mode. In our case this is not necessary, but generally useful.

The section with the code begins on the second line.

global is another NASM directive that makes source-code symbols global. This way the linker knows where the start symbol - our entry point - is.

kmain is a function that will be defined in the kernel.c file. extern means that the function is declared somewhere else.

Then comes start, which calls the kmain function and stops the processor with the hlt instruction. Since an interrupt could wake the processor from hlt, we disable interrupts beforehand with the cli instruction.

Ideally, we should allocate some memory ourselves and point the stack pointer (esp) at it. Although GRUB appears to have already set up a stack for us, we still allocate some space in the BSS section and move the stack pointer to its top. We use the resb directive, which reserves the specified number of bytes. Immediately before calling kmain, the stack pointer (esp) is set to the correct location with the mov instruction.

Kernel in C

In kernel.asm we made a call to the kmain() function. Thus, our “C” code should start execution with kmain() :

/*
 * kernel.c
 */
void kmain(void)
{
    const char *str = "my first kernel";
    char *vidptr = (char*)0xb8000;  /* video mem begins here */
    unsigned int i = 0;
    unsigned int j = 0;

    /* this loop clears the screen:
     * there are 25 lines of 80 columns each; each element takes 2 bytes */
    while (j < 80 * 25 * 2) {
        vidptr[j] = ' ';       /* blank character */
        vidptr[j+1] = 0x07;    /* attribute byte - light grey on black */
        j = j + 2;
    }

    j = 0;
    /* this loop writes the string to video memory */
    while (str[j] != '\0') {
        vidptr[i] = str[j];    /* the character's ASCII code */
        vidptr[i+1] = 0x07;    /* attribute byte: black bg, light grey fg */
        ++j;
        i = i + 2;
    }
    return;
}

All our kernel will do is clear the screen and display the line “my first kernel”.

First we create a vidptr pointer that points to the address 0xb8000. In protected mode, “video memory” begins from this address. To display text on the screen, we reserve 25 lines of 80 ASCII characters, starting at 0xb8000.

Each character on screen is represented not by the usual 8 bits but by 16: the first byte stores the character itself, and the second is the attribute byte, which describes the character's formatting, such as its color.

To display a green character 's' on a black background, we write this character into the first byte and the value 0x02 into the second: 0 means a black background, 2 means green text.

Here is the color chart:

0 - Black, 1 - Blue, 2 - Green, 3 - Cyan, 4 - Red, 5 - Magenta, 6 - Brown, 7 - Light Grey, 8 - Dark Grey, 9 - Light Blue, 10/a - Light Green, 11/b - Light Cyan, 12/c - Light Red, 13/d - Light Magenta, 14/e - Light Brown, 15/f - White.

In our kernel we will use light gray text on a black background, so our attribute byte will have the value 0x07.
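Composing the attribute byte can be sketched as simple bit arithmetic: the high nibble holds the background color, the low nibble the foreground color from the chart above.

```python
def attribute_byte(fg, bg):
    """VGA text attribute: high nibble = background, low nibble = foreground."""
    return (bg << 4) | fg

print(hex(attribute_byte(0x2, 0x0)))  # 0x2: green on black
print(hex(attribute_byte(0x7, 0x0)))  # 0x7: light grey on black, as used below
```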

In the first loop, the program writes a blank character over the entire 80x25 area, clearing the screen. In the next loop, the characters of the null-terminated string "my first kernel", each with an attribute byte of 0x07, are written to video memory, printing the string on the screen.

Linking

We need to assemble kernel.asm into an object file using NASM, then compile kernel.c into another object file with GCC. Then they need to be linked into the bootable kernel executable.

To do this, we will use a linker script that is passed to ld as an argument.

/*
 * link.ld
 */
OUTPUT_FORMAT(elf32-i386)
ENTRY(start)
SECTIONS
{
    . = 0x100000;
    .text : { *(.text) }
    .data : { *(.data) }
    .bss  : { *(.bss) }
}

First we set the output format to 32-bit Executable and Linkable Format (ELF), the standard binary format for Unix x86 systems. ENTRY takes one argument: the name of the symbol that is the entry point. SECTIONS is the most important part: it defines the layout of our executable file, determining how the different sections are combined and where they are placed.

Inside the braces after SECTIONS, the dot (.) represents the location counter, which defaults to 0x0. It can be changed, which is what we do by setting it to 0x100000.

Let's look at the following line: .text: ( *(.text) ) . An asterisk (*) is special character, matching any file name. The expression *(.text) means all .text sections from all input files.

Thus, the linker joins all the code sections of the object files into one section of the executable file at the address in the position counter (0x100000). After this, the counter value will be equal to 0x100000 + the size of the resulting section.

The same thing happens with other sections.

Grub and Multiboot

Now all the files are ready to create the kernel. But there is one more step left.

There is a standard for loading x86 kernels with a bootloader: the Multiboot specification. GRUB will only boot our kernel if it complies with this specification.

According to it, the kernel must contain a header within its first 8 kilobytes, and this header must contain three 4-byte fields:

  • magic field: contains the magic number 0x1BADB002 identifying the kernel.
  • flags field: we don't need it, so we set it to zero.
  • checksum field: added to the previous two fields, it must give zero.
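The checksum arithmetic is easy to verify: it is chosen so that magic + flags + checksum wraps around to zero in 32-bit arithmetic.

```python
MAGIC, FLAGS = 0x1BADB002, 0x00
# What `dd -(0x1BADB002 + 0x00)` stores: the two's-complement negation,
# masked to 32 bits.
checksum = (-(MAGIC + FLAGS)) & 0xFFFFFFFF
print(hex(checksum))                               # 0xe4524ffe
print(hex((MAGIC + FLAGS + checksum) & 0xFFFFFFFF))  # 0x0, as required
```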

Our kernel.asm will look like this:

;;kernel.asm

;nasm directive - 32 bit
bits 32
section .text
        ;multiboot spec
        align 4
        dd 0x1BADB002            ;magic
        dd 0x00                  ;flags
        dd - (0x1BADB002 + 0x00) ;checksum. m+f+c should be zero

global start
extern kmain            ;kmain is defined in the c file

start:
  cli                   ;block interrupts
  mov esp, stack_space  ;set stack pointer
  call kmain
  hlt                   ;halt the CPU

section .bss
resb 8192               ;8KB for stack
stack_space:

Building the core

Now we will create object files from kernel.asm and kernel.c and link them using our script.

nasm -f elf32 kernel.asm -o kasm.o

This line will run the assembler to create the kasm.o object file in ELF-32 format.

gcc -m32 -c kernel.c -o kc.o

The -c option ensures that the file is only compiled, not linked.

ld -m elf_i386 -T link.ld -o kernel kasm.o kc.o

This will run the linker with our script and create executable file called kernel.

Setting up grub and starting the kernel

GRUB requires the kernel name to follow the pattern kernel-<version>, so rename the file. I named mine kernel-701.

Now put it in the directory /boot. To do this you will need superuser rights.

In the GRUB configuration file grub.cfg, add the following:

title myKernel
root (hd0,0)
kernel /boot/kernel-701 ro

Don't forget to remove the hiddenmenu directive if present.

Restart your computer and you will see a list of kernels including yours. Select it and you will see:

This is your core! Let's add an input/output system.

P.S.

  • For any kernel tricks, it is better to use a virtual machine.
  • To run the kernel in GRUB 2, the config entry should look like this:

    menuentry "kernel 7001" {
        set root='hd0,msdos1'
        multiboot /boot/kernel-7001 ro
    }
  • if you want to use the qemu emulator, use: qemu-system-i386 -kernel kernel

Hi all! Today's article is devoted to how to correctly assemble a semantic core (SC). If you are engaged in SEO promotion in Google and Yandex and want to increase organic traffic, website visits and sales, this material is for you.

To get to the bottom of the truth, we will study the topic from “A to Z”:

In conclusion, we will consider general rules for compiling the SC. So let's get started!

Semantic core: what is it and what are the queries?

The semantic core of a site (also simply "semantics") is a set of words and phrases that exactly matches the structure and theme of the resource. Simply put, these are the queries by which users can find the site on the Internet.

It is the correct semantic core that gives search engines and the audience a complete picture of the information presented on the resource.

For example, if a company sells ready-made postcards, then the semantic core should include the following queries: “buy a postcard”, “postcard price”, “custom postcard” and the like. But not: “how to make a postcard”, “do-it-yourself postcard”, “homemade postcards”.

Interesting to know: LSI copywriting. Will the technique replace SEO?

Classification of requests by frequency:

  • High-frequency queries (HF) - the ones most often typed into the search bar (for example, "buy a postcard").
  • Mid-frequency queries (MF) - less popular than HF keys, but still of interest to a wide audience ("buy postcard price").
  • Low-frequency queries (LF) - phrases that are requested very rarely ("buy an art postcard").

It is important to note that there are no clear boundaries separating HF from MF and LF, since they vary by topic. For example, for the query "origami" the HF level is 600 thousand impressions per month, while for "cosmetics" it is 3.5 million.

In terms of key anatomy, a high-frequency query consists only of the body, while mid- and low-frequency queries add a specifier and a "tail".

When forming a semantic core, you need to use all types of frequency, but in different proportions: minimum HF, maximum LF and average amount of MF.

To make it clearer, let's draw an analogy with a tree. The trunk is the most important request on which everything rests. Thick branches located closer to the trunk are mid-frequency keys, which are also popular, but not as popular as HF. Thin branches are low-frequency words that are also used to search for the desired product/service, but rarely.
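The HF/MF/LF split can be sketched as a simple threshold function. The thresholds below are purely illustrative: as noted above, real boundaries depend on the topic.

```python
def classify_frequency(monthly_shows, hf_min=10_000, mf_min=1_000):
    """Illustrative HF/MF/LF split; real thresholds vary by topic."""
    if monthly_shows >= hf_min:
        return "HF"
    if monthly_shows >= mf_min:
        return "MF"
    return "LF"

print(classify_frequency(600_000))  # HF, like "origami" in the example above
print(classify_frequency(5_000))    # MF
print(classify_frequency(50))       # LF
```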

Separation of keys by competitiveness:

  • highly competitive (HC);
  • medium-competitive (MC);
  • low-competitive (LC).

This criterion shows how many web resources use a given query for promotion. Everything is simple here: the higher the competitiveness of the key, the harder it is to break into and stay in the top 10 with it. Low-competitive keys are often not worth the attention either, since they are not very popular. The ideal option is to promote with MC queries, with which you can realistically take first place in a stable business niche.

Classification of requests according to user needs:

  • Transactional - keys associated with an action (buy, sell, upload, download).
  • Informational - used to obtain information (what, how, why, how much).
  • Navigational - help find information on a specific resource ("buy a telephone socket").

The remaining keywords, when it is difficult to understand the user’s intention, are classified into the “Other” group (for example, just the word “postcard” raises a lot of questions: “Buy? Make? Draw?”).
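This intent split can be sketched with naive marker-word matching, using only the markers listed above. Real intent detection is far subtler, so treat this as a toy illustration:

```python
TRANSACTIONAL = {"buy", "sell", "upload", "download"}
INFORMATIONAL = {"what", "how", "why"}

def classify_intent(query):
    """Toy marker-based classifier; anything ambiguous falls into 'other'."""
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"

print(classify_intent("buy a postcard"))         # transactional
print(classify_intent("how to make a postcard")) # informational
print(classify_intent("postcard"))               # other - intent unclear
```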

Why does a website need a semantic core?

Collecting a semantic core is painstaking work that requires a lot of time, effort and patience. A correct, working SC cannot be put together in a couple of minutes.

A completely reasonable question arises here: is it even worth spending effort on selecting a semantic core for a site? If you want your Internet project to be popular, constantly increase your customer base and, accordingly, increase the company’s profits, the answer is unequivocal: “YES”.

Because collecting the semantic core helps:

  • Increase the visibility of a web resource. Search engines Yandex, Google and others will find your site using the keywords you select and offer it to users who are interested in these queries. As a result, the influx of potential customers increases, and the chances of selling a product/service increase.
  • Avoid competitors' mistakes. When creating a syntax, an analysis of the semantic core of competitors occupying the first position in search results is necessarily performed. By studying the leading sites, you will be able to determine what queries help them stay in the top, what topics they write texts on, and what ideas are unsuccessful. During your competitor analysis, you may also come up with ideas on how to develop your business.
  • Shape the site structure. The semantic core is a good "assistant" for designing the site structure. By collecting the complete SC, you see all the queries users enter when searching for your product or service, which helps you decide on the main sections of the resource. Most likely, you will need to create pages you didn't initially think about. Keep in mind that the SC only suggests users' interests; ideally, the structure matches the business area and the content meets the audience's needs.
  • Avoid spam. By analyzing the semantic cores of top competitor sites, you can determine the optimal keyword frequency: there is no universal query-density figure for all pages of a resource - it depends on the topic and page type, as well as the language and the key itself.

How else can you use the semantic core? To create the right content plan. Properly collected keys will suggest topics for texts and posts that are of interest to your target audience.

Conclusion. It is almost IMPOSSIBLE to create an interesting, popular and profitable Internet project without an SC.


Preparing to collect the semantic core for the site

Before creating the semantic core of the site, you need to perform the following steps:

I. Study the company’s activities (“brainstorming”)

Here it is important to write down ALL the services and goods the organization offers. For example, to collect a semantic core for an online furniture store, you might start from queries such as: sofa, armchair, bed, hallway, cabinet + restoration, repair. The main thing is not to miss anything and not to add anything superfluous - only relevant information. If the company does not sell poufs or repair furniture, those queries are not needed.
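The "only relevant information" rule can be sketched as a stop-word filter. The stop words here assume a hypothetical store that sells no poufs and offers no repairs:

```python
# Hypothetical stop words: goods/services this particular company does NOT offer.
STOP_WORDS = {"pouf", "poufs", "repair"}

def is_relevant(query):
    """Drop queries that mention goods or services the company does not offer."""
    return not (set(query.lower().split()) & STOP_WORDS)

queries = ["buy a sofa", "pouf price", "furniture repair"]
print([q for q in queries if is_relevant(q)])  # ['buy a sofa']
```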

In addition to brainstorming, you can use the Google Analytics and Yandex.Metrika services (Fig. 1) or personal accounts in Google Search Console and Yandex.Webmaster (Fig. 2). They will tell you which queries are most popular with your target audience. Such help is only available to sites that are already up and running.

Tools to help with text analysis:

  • Advego - works on the same principle as Istio.com.

  • Simple SEO Tools - a free service for SEO analysis of a site, including its semantic core.

  • Lenartools. It works simply: load the pages from which you need to "pull" keys (max 200), click "Let's go" - and you get a list of the words used most often on those resources.

II. To analyze the semantic core of a competitor site:

  • SEMRUSH - add the resource address, select the country, click "Start Now" and get the analysis. The service is paid, but 10 free checks are provided upon registration. Also suitable for collecting keys for your own business project.

  • Searchmetrics - a very convenient tool, but paid and English-only, so it is not accessible to everyone.

  • SpyWords - a service for analyzing a competitor's activities: budget, search traffic, ads, queries. A reduced set of functions is available for free; for a fee you can get a detailed picture of the progress of the company you are interested in.

  • Serpstat - a multifunctional platform that reports on keywords, rankings, competitors in Google and Yandex search results, backlinks, etc. Suitable both for selecting keywords and for analyzing your own resource. The only downside: the full range of services is available only after paying for a tariff plan.

  • PR-CY - a free program for analyzing the semantic core, usability, mobile optimization, link mass and much more.

Another effective way to expand the semantic core is to use synonyms. Users may search for the same product or service in different ways, so it is important to include all alternative keys in the SC. Suggestions in Google and Yandex will help you find synonyms.

Advice. If the site is informational, first select the queries that are central to the resource and for which promotion is planned, and only then the seasonal ones. For example, for a web project about fashion trends in clothing, the key queries would be: fashion, women's, men's, children's; the "seasonal" ones - autumn, winter, spring, etc.

How to assemble a semantic core: detailed instructions

Having decided on a list of queries for your site, you can begin collecting the semantic core.

It can be done:

I. FREE using:

Wordstat Yandex

Yandex Wordstat is a very popular online service with which you can:

  • collect the semantic core of the site with statistics for the month;
  • get words similar to the query;
  • filter keywords entered from mobile devices;
  • find out statistics by city and region;
  • determine seasonal fluctuations of keys.

Big drawback: you have to “unload” the keys manually. But if you install the extension Yandex Wordstat Assistant, working with the semantic core will speed up significantly (relevant for the Opera browser).

It's easy to use: click "+" next to the desired key or click "add all"; queries are automatically transferred to the extension's list. After collecting the SC, transfer it to a spreadsheet editor and process it. Important advantages of the extension: duplicate checking, sorting (alphabetical, by frequency, by order added), and the ability to add keys manually.

Step-by-step instructions on how to use the service are given in the article: Yandex. Wordstat: how to collect key queries?

Google Ads

The keyword planner from Google allows you to select a semantic core online for free. The service finds keywords based on the queries of Google search users. A Google account is required.

The service offers:

  • find new keywords;
  • see the number of requests and forecasts.

To collect the semantic core, enter a query and select the location and language. The program shows the average number of requests per month and the level of competition. There is also information about ad impressions and the bid required to display an ad at the top of the page.

If necessary, you can set a filter by competition, average position and other criteria.

It is also possible to request a report (the program shows step-by-step instructions for doing this).

To study traffic forecasting, just enter a query or a set of keys in the “See the number of queries and forecasts” window. The information will help determine the effectiveness of the strategic plan for a given budget and rate.

The disadvantages of the service include the lack of exact frequency (only a monthly average) and the fact that it does not show encrypted Yandex keys and hides some Google ones. On the plus side, it gauges competition and lets you export keywords in Excel format.

SlovoEB

This is a free version of Key Collector, which has a lot of useful features:

  • quickly collects a semantic core from the right and left columns of WordStat;
  • performs batch collection of search tips;
  • determines all types of frequency;
  • collects seasonality data;
  • allows you to perform batch collection of words and frequency from Rambler.Adstat;
  • Calculates KEI (Key Effectiveness Index).
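On the last point: one widely cited KEI variant divides the squared search volume by the level of competition. This is a sketch under that assumption; the exact formula the tool uses is not documented here.

```python
def kei(searches, competing_pages):
    """One common KEI variant: popularity squared over competition.
    Not necessarily the exact formula SlovoEB implements."""
    if competing_pages == 0:
        return float("inf")  # no competition at all
    return searches ** 2 / competing_pages

# Hypothetical numbers: 1,200 monthly searches vs 400,000 competing pages.
print(kei(1200, 400_000))  # 3.6 - higher KEI suggests a more attractive key
```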

To use the service, just enter your Direct account information (login and password).

If you want to know more, read the article: Slovoeb (Slovoeb). Basics and instructions for use

Bukvariks

An easy-to-use and free program for collecting the semantic core, the database of which includes more than 2 billion queries.

It stands out for its fast operation, as well as useful features:

  • supports a large list of exception words (up to 10 thousand);
  • allows you to create and use lists of words directly when forming a sample;
  • offers to compile lists of words by multiplying several lists (Combinator);
  • removes duplicate keywords;
  • shows frequency (but only “worldwide”, without selecting a region);
  • analyzes domains (one or more, comparing the semantic cores of resources);
  • exports results in .csv format.

The only significant drawback is the size of the installed program (≈ 28 GB archived, ≈ 100 GB unpacked). But there is an alternative: collecting the semantic core online.

II. PAID programs:

Base of Maxim Pastukhov

A Russian service with a database of more than 1.6 billion keywords with Yandex WordStat and Direct data, plus an English-language database of more than 600 million words. It works online and helps not only in creating a semantic core, but also in launching an advertising campaign in Yandex.Direct. Its most significant disadvantage can safely be called its high cost.

Key Collector

Perhaps the most popular and convenient tool for collecting the semantic core.

Key Collector:

  • collects keywords from the right and left columns of Yandex WordStat;
  • filters out unnecessary queries using the Stop Words option;
  • searches for duplicates and identifies seasonal keywords;
  • filters keys by frequency;
  • exports to Excel format;
  • finds pages relevant to the query;
  • collects statistics from Google Analytics, AdWords, etc.

You can evaluate how Key Collector collects a semantic core for free in the demo version.

Rush Analytics

A service with which you can collect and cluster the semantic core.

In addition, Rush Analytics:

  • looks for suggestions in YouTube, Yandex and Google;
  • offers a convenient stop word filter;
  • checks indexing;
  • determines frequency;
  • checks site positions for desktops and mobiles;
  • generates technical specifications for texts, etc.

An excellent tool, but paid: there is no demo version, only limited free checks.

Mutagen

The program collects key queries from the first 30 sites in Yandex search results. It shows monthly frequency and the competitiveness of each search query, and recommends using words with a score of up to 5 (high-quality content is enough to effectively promote such keywords).

Useful article: 8 types of texts for a website - write correctly

The program is paid, but there is a free limit of 10 checks per day (available after topping up your balance by at least 1 ruble). It is open only to registered users.
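The "scores up to 5" recommendation above is easy to apply to an exported keyword list. A minimal sketch, assuming the keys are loaded as (phrase, score) pairs (the data layout is an assumption, not Mutagen's actual export format):

```python
def low_competition_keys(rows, max_score=5):
    """rows: list of (phrase, competitiveness_score) pairs.

    Keeps only queries whose competitiveness score does not exceed
    max_score, following the "up to 5" recommendation.
    """
    return [phrase for phrase, score in rows if score <= max_score]
```

This leaves only the keys that, per the rule of thumb above, can be promoted with high-quality content alone.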

Keyword Tool

A reliable service for creating a semantic core that:

  • in the free version, collects more than 750 keys for each query, using suggestions from Google, YouTube, Bing, Amazon, eBay, App Store and Instagram;
  • in the paid version, also shows query frequency, competition, cost in AdWords, and dynamics.

The program does not require registration.

In addition to the presented tools, there are many other services for collecting the semantic core of a site with detailed video reviews and examples. I settled on these because I think they are the most effective, simple and convenient.

Conclusion. If possible, it is advisable to purchase licenses for paid programs, since their functionality is much wider than that of free analogues. But for simply collecting a semantic core, free services are also quite suitable.

Clustering of the semantic core

A ready-made semantic core, as a rule, includes many keywords (for example, for the request “upholstered furniture,” services return several thousand words). What to do next with such a huge number of keywords?

The collected keys are needed:

I. Clear out “garbage”, duplicates and “dummies”

Queries with zero frequency or errors are simply deleted. To eliminate keys with unnecessary “tails”, I recommend using the Excel “Sort & Filter” function. What counts as garbage? For example, for a commercial site, words such as “download”, “free”, etc. will be superfluous. Duplicates can also be removed automatically in Excel using the “Remove Duplicates” option (see examples below).

We remove keys with zero frequency:

Removing unnecessary “tails”:

Getting rid of duplicates:
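The three cleaning steps above (zero-frequency keys, junk “tails”, duplicates) can also be scripted instead of done in Excel. A minimal sketch, assuming the keys are loaded as (phrase, frequency) pairs and the stop words are the examples from the text:

```python
def clean_keywords(rows, stop_words=("download", "free")):
    """rows: list of (phrase, frequency) pairs.

    Drops zero-frequency keys, phrases containing a stop word,
    and exact duplicates (keeping the first occurrence).
    """
    seen = set()
    cleaned = []
    for phrase, freq in rows:
        p = phrase.strip().lower()
        if freq == 0:
            continue  # "dummy" query with no demand
        if any(sw in p.split() for sw in stop_words):
            continue  # junk "tail" for a commercial site
        if p in seen:
            continue  # duplicate
        seen.add(p)
        cleaned.append((p, freq))
    return cleaned
```

The stop-word list would of course be extended per project; "download" and "free" are just the examples given above.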

II. Remove highly competitive queries

If you don’t want the “path” to the top to take years, exclude highly competitive (HC) keys. With such keywords, it is not enough just to reach the first positions in the search results; what is more important, and more difficult, is to stay there.
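If the exported list includes a competition column, this filtering can also be scripted. A sketch assuming competition is given as a 0–1 value (the scale and the 0.66 cut-off are illustrative assumptions, not universal rules):

```python
def drop_high_competition(rows, max_competition=0.66):
    """rows: list of (phrase, competition) pairs, competition in [0, 1].

    Keeps only low- and mid-competition keys; the 0.66 cut-off
    is an illustrative assumption to be tuned per niche.
    """
    return [(p, c) for p, c in rows if c <= max_competition]
```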

An example of how to identify HC keys using Google's keyword planner (using the filter, you can keep only low- and mid-competition keys):

III. Group the semantic core

You can do this in two ways:

1. PAID:

  • KeyAssort is a semantic core clusterer that helps create a site structure and find niche leaders. It works with Yandex and Google search results and groups 10 thousand queries in just a couple of minutes. You can evaluate the benefits of the service by downloading the demo version.

  • SEMparser performs automatic grouping of keys, creates a site structure, identifies leaders, generates technical specifications for copywriters, parses Yandex highlighting, and determines the geo-dependence, “commerciality” and page relevance of queries. In addition, the service checks how well a text matches the top results by SEO parameters. How it works: you collect the semantic core and save it in .xls or .xlsx format, create a new project in the service, select a region, upload the file with queries, and after a few seconds receive the words sorted into semantic groups.

In addition to these services, I can also recommend Rush Analytics, which we have already met above, and Just-Magic.

Rush Analytics:

Just-Magic:

2. FREE:

  • Manually, using Excel and the Sort & Filter functions. To do this: set a filter, enter a query for the group (for example, “buy” or “price”), and highlight the resulting list of keys in color. Next, set up the “Custom Sort” option (sorting by color) within the specified range. The final touch is to add names to the groups.

Step 1

Step 2

Step 3

Step 4

An example of an ungrouped semantic core:
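The same marker-word grouping done manually in Excel above can be sketched in code. The markers “buy” and “price” are just the examples used in the steps; real projects would use a fuller marker list:

```python
def group_keys(phrases, markers=("buy", "price")):
    """Assigns each phrase to the first marker word it contains;
    phrases matching no marker go into the 'other' group."""
    groups = {m: [] for m in markers}
    groups["other"] = []
    for phrase in phrases:
        words = phrase.lower().split()
        for m in markers:
            if m in words:
                groups[m].append(phrase)
                break
        else:
            groups["other"].append(phrase)
    return groups
```

The "other" group is then reviewed by hand, exactly as in the manual workflow.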

  • SEOQUICK is a free online program for automatic clustering of the semantic core. To “scatter” keys into groups, just upload a file with queries or add them manually and wait a minute. The tool works quickly, determining the frequency and type of each key. It allows you to delete unnecessary groups and export the document in Excel format.

  • Keyword Assistant. The service works online on the principle of an Excel table, i.e. you will have to distribute the keywords manually, but it takes much less time than working in Excel.

How to cluster the semantic core and which methods to use is up to you. I believe that grouping exactly the way you need can only be done manually. It takes longer, but it is effective.

After collecting and distributing the semantic core into sections, you can begin writing texts for the pages.

Read a related article with examples: How to correctly enter keywords into the text?

General rules for creating a semantic core

To summarize, it is important to add tips that will help you assemble the correct semantic core:

The semantic core should be designed so that it meets the needs of as many potential clients as possible.

The semantics must exactly correspond to the theme of the web project, i.e. you should focus only on targeted queries.

It is important that the finished semantic core includes only a few high-frequency keys, the rest is filled with mid- and low-frequency ones.

The semantic core should be regularly expanded to increase natural traffic.

And the most important thing: everything on the site (from keys to structure) must be done “for people”!

Conclusion. A well-assembled semantic core gives a real chance to quickly promote and maintain a site in top positions in search results.

If you doubt that you can assemble a correct semantic core yourself, it is better to order a semantic core for the site from professionals. This will save energy and time, and bring more benefit.

It will also be interesting to know: How to place and speed up the indexing of an article? 5 secrets of success

That's all. I hope the material will be useful to you in your work. I would be grateful if you share your experience and leave comments. Thank you for your attention! See you online!

In addition to collecting the semantic core of a site, it is important to know how to use it correctly, with maximum benefit for internal and external site optimization.

A lot of articles have already been written on how to create a semantic core, so in this article I want to draw your attention to some features and details that will help you use the semantic core correctly and thereby help promote an online store or website. But first, I will briefly give my definition of the semantic core of a site.

What is the semantic core of a site?

The semantic core of a site is a list, set, array, or collection of keywords and phrases that users (your potential visitors) type into search engines to find the information they are interested in.

Why does a webmaster need to create a semantic core?

Based on the definition of the semantic core, there are a lot of obvious answers to this question.

It is important for online store owners to know how potential buyers are trying to find the product or service that the online store owner wants to sell or provide. The position of the online store in search results directly depends on this understanding. The more consistent the content of an online store is with consumer search queries, the closer the online store is to the TOP of search results. This means that the conversion of visitors into buyers will be higher and of better quality.

For bloggers who are actively monetizing their blogs (moneymaking), it is also important to be in the TOP of search results on topics relevant to the blog's content. Increased search traffic brings more profit from impressions and clicks on contextual advertising, from affiliate program ad units, and from other types of earnings.

The more original and useful content a site has, the closer the site is to the TOP. Life exists primarily on the first page of search engine results. Therefore, knowing how to create a semantic core is necessary for SEO of any website or online store.

Sometimes webmasters and online store owners wonder where to get high-quality and relevant content. The answer follows from the question: you need to create content in accordance with key user queries. The more search engines consider your site's content relevant to users' keywords, the better for you. This, by the way, also answers the question of where to get a variety of content topics. It's simple: by analyzing users' search queries, you can find out what they are interested in and in what form. Thus, having created the semantic core of the site, you can write a series of articles and/or descriptions for the products of the online store, optimizing each page for a specific keyword (search query).

For example, I decided to optimize this article for the key query “how to make a semantic core”, because competition for this query is lower than for the queries “how to create a semantic core” or “how to compose a semantic core”. Thus, it is much easier for me to get into the TOP of search engine results for this query using completely free promotion methods.

How to create a semantic core, where to start?

There are a number of popular online services for compiling a semantic core.

The most popular service, in my opinion, is Yandex keyword statistics - http://wordstat.yandex.ru/

Using this service, you can collect the vast majority of search queries in various word forms and combinations for any topic. For example, in the left column we see statistics on the number of queries not only for the key phrase “semantic core”, but also for various combinations of this phrase in different inflections and with diluting and additional words. In the right column we see statistics for search phrases that were searched together with the key phrase “semantic core”. This information can be valuable, at least as a source of topics for creating new content relevant to your site. I also want to mention one feature of this service: you can specify the region. Thanks to this option, you can more accurately find out the number and nature of the search queries you need for the desired region.


Another service for compiling a semantic core is statistics of Rambler search queries - http://adstat.rambler.ru/


In my subjective opinion, this service is useful when you are fighting to attract every single user to your site. Here you can clarify some low-frequency and long-tail queries, with roughly 1 to 5-10 user requests per month, i.e. very few. I should note right away that we will cover the classification of keywords and the features of each group later. Therefore, I personally rarely use these statistics, usually in cases where I am working on a highly specialized site.

To form the semantic core of the site, you can also use the suggestions that appear when entering a search query in the search engine's search bar.



Another option for residents of Ukraine to expand the list of keywords for the site's semantic core is to view site statistics at http://top.bigmir.net/


Having selected the desired section topic, we look for open statistics of the most visited and relevant site


As you can see, the statistics of interest may not always be open; as a rule, webmasters hide them. However, it can also work as an additional source of keywords.

By the way, a wonderful article by Globator (Mikhail Shakin) will teach you how to neatly arrange the entire list of keywords in tabular form in Excel - http://shakin.ru/seo/keyword-suggestion.html There you can also read about how to make a semantic core for English-language projects.

What to do next with the list of keywords?

First of all, to create a semantic core, I recommend structuring the list of keywords by dividing it into conditional groups: high-frequency (HF), mid-frequency (MF) and low-frequency (LF) keywords. It is important that these groups contain keywords that are very similar in morphology and topic. The most convenient way to do this is in the form of a table. I do it something like this:


The top row of the table contains high-frequency (HF) search queries (written in red). I put them at the head of thematic columns, and in each cell I sorted mid-frequency (MF) and low-frequency (LF) search queries as homogeneously as possible by topic. That is, I linked the most suitable groups of MF and LF queries to each HF query. Each cell of MF and LF queries is a future article, which I write and optimize strictly for the set of keywords in that cell. Ideally, one article should correspond to one keyword (search query), but this is very routine and time-consuming work, because there can be thousands of such keywords! Therefore, if there are a lot of them, you need to pick out the most significant ones and weed out the rest. You can also optimize a future article for 2-4 MF and LF keywords.
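Splitting the list into HF/MF/LF bands before building such a table is easy to automate. The thresholds below are illustrative assumptions, since sensible cut-offs depend heavily on the niche:

```python
def frequency_band(freq: int, hf: int = 10_000, mf: int = 1_000) -> str:
    """Classifies a query by monthly frequency into HF/MF/LF.

    Thresholds are illustrative assumptions; pick cut-offs
    that fit the demand levels of your own topic.
    """
    if freq >= hf:
        return "HF"
    if freq >= mf:
        return "MF"
    return "LF"
```

HF keys then become the column headers, and MF/LF keys are distributed into the cells beneath them.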

For online stores, MF and LF search queries are usually product names. Therefore, there are no particular difficulties in the internal optimization of each page of an online store; it is just a long process, and the more products in the store, the longer it takes.

I have highlighted in green the cells for which I already have articles ready, so that I won't get confused about the list of finished articles later.

I will tell you how to optimize and write articles for a website in one of the future articles.

So, having made such a table, you can get a very clear idea of how to create the semantic core of a site.

To conclude this article, I want to say that we have, to some extent, touched on the details of promoting an online store. Surely, after reading some of my articles about promoting an online store, you realized that compiling the semantic core of a site and internal optimization are interrelated and interdependent activities. I hope that I was able to make the case for the importance and priority of the question of how to make a site's semantic core.


