K Nearest Neighbours (kNN) is a basic classification algorithm. The idea probably evolved from the Rote classifier, which is about as simple as the point system in ‘Whose Line Is It Anyway’: the system memorizes the whole training set and classifies only items whose values exactly match an item in the training set. The obvious disadvantage is that a lot of objects will remain unclassified. The “next generation” of the concept classifies a new object using the value of the nearest point in the dataset. Compared to the previous approach this is a huge difference, but the system is still vulnerable to noise and outliers.

kNN is (compared to the previous strategies) a bit more sophisticated. The algorithm finds the group of k objects in the training set that are nearest to the new object under some notion of “distance”, and based on those findings assigns the new object to one of the previously given classes, respecting the weights given to the neighbours. The important issues are:

  • the number of neighbours k (important enough to appear in the name of the algorithm)
  • the meaning of “distance”
  • the training set, which is the basis of everything

These parameters strongly influence the results, and I am going to write another post to discuss them in a bit more detail.

The procedure goes:

  1. Memorize the training set (and prepare it to be updated dynamically if data arrives continuously)
  2. Measure the distance between the new object and the objects in the training set to find the nearest ones
  3. Use the collected information to classify the new object
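
A minimal sketch of this procedure in Python (the toy data, k = 3, unweighted majority voting and the Euclidean distance are my assumptions for illustration; the algorithm also allows weighted votes):

```python
import math
from collections import Counter

def euclidean(a, b):
    # one possible meaning of "distance"; any metric could be plugged in
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(training_set, new_object, k=3):
    # training_set is kept in memory as (features, label) pairs (lazy learning)
    nearest = sorted(training_set, key=lambda item: euclidean(item[0], new_object))[:k]
    # unweighted majority vote among the k nearest neighbours
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# hypothetical toy data: two classes in 2D
training = [((1.0, 1.0), 'A'), ((1.2, 0.8), 'A'), ((5.0, 5.1), 'B'), ((4.8, 5.3), 'B')]
print(knn_classify(training, (1.1, 0.9)))  # -> 'A'
```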

Although building a model with kNN is not a very difficult task, the cost of classification is relatively high. Comparing the new object with the whole training set (lazy learning) is responsible for that, and it is especially visible on large datasets. There are some techniques that reduce the amount of computation, from simply editing the training set (sometimes the results are even better than classification with the larger database) to proximity graphs.
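
For illustration, a sketch of one such speed-up, assuming SciPy is available: a kd-tree is built once, so each query no longer scans the whole training set.

```python
import numpy as np
from scipy.spatial import cKDTree  # assumption: SciPy is installed

points = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.1], [4.8, 5.3]])
labels = np.array(['A', 'A', 'B', 'B'])

tree = cKDTree(points)                 # built once, reused for every query
_, idx = tree.query([1.1, 0.9], k=3)   # indices of the 3 nearest neighbours
classes, counts = np.unique(labels[idx], return_counts=True)
print(classes[counts.argmax()])        # majority vote -> 'A'
```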

Sources: Top 10 Algorithms in Data Mining, Springer, 2008.


November 18th, 2016

Posted In: algorithm, web content, web mining, YouTube


PageRank – Larry Page’s algorithm – is probably the most popular and well-known use of web linkage mining. This non-contextual approach is simply a popularity contest, where the weight of a ‘vote’ is measured by the importance of the site it originates from: the better the site linking to my page is, the bigger the gain in my rating. Looking inside, the importance of a site is measured by the probability of visiting it; how Google gets the actual numbers is obviously their secret (I bet naive Bayes is used somewhere in there ;).

What about reality? PageRank is vulnerable to spamming, and a lot of people cheat PR for a living. In short, a farm of sites is created, and its coordinated work pulls the target site up in the ranking. There is also the language problem of how to deal with ambiguous keywords. Then there is a technical problem, solved more or less well by the taxation mechanism: pages with no outgoing links (PR thieves, as the PR popularity flows in and would stay there forever). Random jumping also helps with such dead-end sites. Prediction mechanisms are worth mentioning as well, along with using local resources to save time and computing power, e.g. processing data for a whole domain or server.
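
A minimal power-iteration sketch of the idea, with the taxation/random-jump mechanism included (the damping factor d = 0.85 and the three-page toy graph are my assumptions):

```python
def pagerank(links, d=0.85, iterations=50):
    # links maps each page to the list of pages it links to
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # rank held by dead-end pages is spread uniformly; the random jump
        # is what stops the popularity from staying there forever
        leaked = sum(rank[p] for p in pages if not links[p])
        rank = {p: (1 - d) / n + d * (leaked / n +
                    sum(rank[q] / len(links[q]) for q in pages if links[q] and p in links[q]))
                for p in pages}
    return rank

# toy web: A and B link to each other, both link to the dead end C
print(pagerank({'A': ['B', 'C'], 'B': ['A', 'C'], 'C': []}))
```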

There are some modifications of the PageRank algorithm. An interesting one is topic-sensitive PageRank by T. Haveliwala. Contexts were added (topic-specific groups, like the DMOZ categories), and the idea is to keep the results close to a previously specified topic. The big advantage of this approach is that personalization of the search process can be applied easily (a user-specific popularity ranking instead of the general one).
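
A hedged sketch of the biasing idea, reusing the shape of the PageRank function above (the topic set and the toy graph are made-up examples): the random jump lands only on pages from the chosen topic group, so the ranking stays close to that topic.

```python
def topic_pagerank(links, topic_pages, d=0.85, iterations=50):
    # like plain PageRank, but the (1 - d) random-jump mass is spread
    # over the topic set only, instead of over all pages
    pages = list(links)
    jump = {p: (1.0 / len(topic_pages) if p in topic_pages else 0.0) for p in pages}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        rank = {p: (1 - d) * jump[p] +
                   d * sum(rank[q] / len(links[q]) for q in pages if links[q] and p in links[q])
                for p in pages}
    return rank

# toy example: bias the ranking towards the 'sports' page
print(topic_pagerank({'A': ['B'], 'B': ['A', 'sports'], 'sports': ['A']}, {'sports'}))
```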

November 18th, 2016

Posted In: pagerank, web content, web mining, YouTube

Leave a Comment

K-means (the centroid algorithm; LBG) is a simple algorithm for partitioning given data into clusters. The main purpose is to keep the data within each cluster similar while simultaneously keeping the error minimal. K-means is a greedy type of algorithm. The main idea is: choose c cluster centres randomly, then assign every object to the class of the centre it is closest to. Then update the centre of each set of objects (relocate the central point) and check again whether every object is assigned to its nearest centroid. If not, update. Repeat accordingly.
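
A minimal sketch of that loop in Python (the 2D toy data, c = 2, the fixed iteration count and the squared Euclidean distance are my assumptions):

```python
import random

def kmeans(data, c=2, iterations=20):
    # choose c starting centres randomly; different starts can give different results
    centres = random.sample(data, c)
    for _ in range(iterations):
        # assignment step: every object joins the class of its nearest centroid
        clusters = [[] for _ in range(c)]
        for point in data:
            i = min(range(c), key=lambda j: sum((p - q) ** 2 for p, q in zip(point, centres[j])))
            clusters[i].append(point)
        # update step: relocate each centre to the mean of its cluster
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres, clusters

# toy data: two obvious groups in 2D
data = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
print(kmeans(data)[0])
```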

The k-means complexity is O(cni), where c is the number of clusters, n the number of objects and i the number of iterations. The main disadvantage is that outliers and noise can badly influence the result. A solution to the outlier problem is to use medoids instead of means; medoids are not very sensitive to outliers (we take the central object of the dataset, not the mean). There is also the danger of getting stuck in a local optimum, so with k-means we always have to run the algorithm several times with different sets of starting points.
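
A small illustration of the medoid idea, on a made-up 1D cluster with one outlier: the medoid is an actual member of the cluster minimizing the total distance to the others, so the outlier cannot drag it away the way it drags the mean.

```python
def medoid(cluster):
    # the member of the cluster with the smallest total distance to all others
    return min(cluster, key=lambda c: sum(abs(c - x) for x in cluster))

cluster = [1.0, 1.1, 0.9, 1.2, 100.0]   # one extreme outlier
print(sum(cluster) / len(cluster))       # mean -> 20.84, dragged by the outlier
print(medoid(cluster))                   # medoid -> 1.1, barely affected
```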

Nevertheless, k-means is the most popular partitioning algorithm. It is simple, easy to implement and scalable. Moreover, it is possible to use it with dynamic data. Improvements of the k-means algorithm are generally aimed at making it suitable for very large datasets; the best attempts involve kd-trees or the triangular inequality.

Posted on January 23rd, 2010 under definitions, web mining by admin

Looking at data mining and web mining side by side, there is one main difference: while data mining generally operates on the content alone, web mining also uses the structure of the data:

  • WEB CONTENT MINING operates on the content, which most of the time is text (maybe that is why it is also called text mining)
  • WEB LINKAGE MINING extracts information from the link structure of sites.
  • WEB USAGE MINING extracts information from logs, e.g. tracking a user’s movement from one page to another

This gives a range of possibilities that do not exist in usual data mining.


November 17th, 2016

Posted In: web content, web mining, YouTube


It is a rather obvious statement that the motor of every social-networking site is its content creators. Every owner of a social-networking site knows that what he provides is only the machine, leaving the “stream of life” in the hands (and keyboards) of the most active users. Nothing says more than numbers: my research shows that only 0.5% of all users of my S-N site are responsible for 38% of the content created!

From the business point of view it is critical to have such users. In a situation where everybody wants to eat but nobody plants the crops, the result is starvation for most of the society. It is also said that valuable content has a magnetism of its own, attracting both users and search engines.

Hunting for content creators should be high on the TO DO list after starting a UCC (user-created content) website. Connecting the dots – content creation on S-N sites and my interest in data mining – resulted in an idea: use data mining to discover users who might be better-than-average content suppliers.

How to do it, having an 8-year-old Internet board database full of profile information, with over 3200 users and over 115k posts? How will it affect the life of the society? How reliable is the research? And finally, what is the point (where is the money)?

As usual, there are a lot of questions, with answers given in terms of probability. The next part of the picture is revealed below.

Part 2

As I wrote in the previous part above (content creators part 1), discovering ubercreators and exploiting this knowledge should be an important part of the development of every social-networking site.

My project (idea) is to set up a system that finds content creators on a functioning Internet board using data mining algorithms. Some details:

  • a MySQL database with over 3k users and about 70 parameters describing them,
  • a selection of the parameters describing users must be performed (manually; technically it comes down to selecting tables in the database, though the process could be automated if necessary),
  • Weka is used as the set of classifiers and clustering algorithms (the data has to be prepared for both the program and the algorithm; a sketch of such preparation follows this list)
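
For illustration, a sketch of that data-preparation step, assuming a hypothetical `users` table with made-up column names (the mysql-connector-python package and the connection details are also assumptions); it dumps the selected parameters into Weka’s ARFF input format:

```python
import mysql.connector  # assumption: the mysql-connector-python package

# hypothetical parameters chosen to describe user activity
ATTRIBUTES = ['posts_per_day', 'avg_post_length', 'threads_started']

conn = mysql.connector.connect(host='localhost', user='board',
                               password='secret', database='board')
cur = conn.cursor()
cur.execute('SELECT {} FROM users'.format(', '.join(ATTRIBUTES)))

# ARFF is the format Weka's classifiers and clusterers expect
with open('users.arff', 'w') as f:
    f.write('@RELATION users\n\n')
    for name in ATTRIBUTES:
        f.write('@ATTRIBUTE {} NUMERIC\n'.format(name))
    f.write('\n@DATA\n')
    for row in cur:
        f.write(','.join(str(v) for v in row) + '\n')
```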

Content creation on a discussion board is not a really complex issue. Although it is difficult to evaluate the value of the messages, in most cases it is not even necessary: it is enough to eliminate obvious cases of spamming and just let the snowball roll down the hill.

At a certain moment, discovering users with a hidden potential to create valuable content can give the evolving society a serious boost. Given a set of users and their parameters, with an emphasis on those describing activity and “creative spirit”, the algorithm does the rest of the job, clustering users into groups with a high level of similarity. The point is to use the results of this classification to give positive feedback to the likely creators and exploit their potential.

The most reliable way to measure the results is to implement the model in a real-life system. However, it is also necessary to try some modelling first, because walking in the dark without even a flashlight to predict whether it is going to succeed is unacceptable in any business. Success means, in this case, rapid development of the network society with visible growth in valuable content and SEO parameters.

Content creators in social-networking sites part 1

The next chapter covers the choice of parameters, the algorithm and the modelling.


November 17th, 2016

Posted In: hunting content, web content, web mining, YouTube


Web content mining is the part of the data mining domain that is closest to the classic definition of DM. Its aspects map onto similar domains in classic data mining:

  • automatic content extraction from web pages
  • integration of the information
  • opinion and reviews extraction
  • knowledge synthesis
  • noise detection and segmentation

Briefly said, the web content mining areas listed above are solutions to more or less complicated problems connected with the automation of web usage, leading to improvements in several aspects of daily Internet life, in both technical and non-technical matters.

Web mining is generally a branch of data mining, so before introducing web mining I want to take one step back and present some thoughts about data mining itself.

Data mining, or data exploration, is a set of techniques used to automatically discover non-trivial relations, patterns and schemes in large data collections. In other words, we are looking for deeply hidden knowledge in very large datasets (in the web mining case, the Internet), and we only accept automatic solutions. Why? For better understanding: having such a mechanism, we can ask much more difficult questions (compared to, e.g., SQL).

At this point, we can say that web mining is data mining with the Internet as the dataset.

Let’s take a short look at the applications of web mining:

  • data classification (e.g. customers’ sentiment, reviews…)
  • natural language processing (NLP, not to be confused with neuro-linguistic programming)
  • WWW personalization
  • knowledge management

Sources:

wazniak.mimuw.edu.pl – data mining (.pps); Wikipedia; Bing Liu – Web Mining Tutorial, 2005.


November 15th, 2016

Posted In: introduction, mining, web content, YouTube

