PageRank algorithm computes incorrect page ranks

Problem description

I am trying to implement the PageRank algorithm.

I have 5 web pages in total (see the figure below). The figure is a graph showing which web page links to which other pages.

(figure: a directed graph of the five web pages and the links between them)

I have stored these web page links in a HashMap, such that each web page's unique link is stored as a key, and a HashSet containing the links of all the web pages that the given web page points to is stored as that key's value (see the figure below).

(figure: the contents of the HashMap)

Each web page is represented by its unique link. The HashMap mentioned above is represented in code as:

HashMap<URI, HashSet<URI>> graph = new HashMap<>();
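
Since the original screenshot is not available, here are the map's contents reconstructed from the links created in the main method shown later:

www.a.com -> {www.b.com, www.d.com}
www.b.com -> {www.c.com, www.d.com}
www.c.com -> {}
www.d.com -> {www.a.com}
www.e.com -> {www.d.com}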

I chose a decay value of 0.85 and an epsilon of 0.00001.

The problem

After generating the HashMap mentioned above, I compute the page rank of each web page.
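
For reference, the update rule that the code below implements is the standard PageRank iteration, where a dangling page (one with no outgoing links) distributes its rank to all N pages:

newRank(p) = (1 - decay) / N + decay * sum( oldRank(q) / outDegree(q) )   over every page q that links to p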

The final converged page ranks should be:

(figure: the expected converged page ranks)

But my actual page ranks are:

Page A = 0.3170604814815385
Page B = 0.18719407056490575
Page C = 0.13199010955519944
Page D = 0.31131469834360015
Page E = 0.05244064005475638

The actual values are acceptable for every page except Page D, in the sense that for each of the other pages the difference between the actual and the expected value is less than the chosen epsilon.

I have tried this PageRank algorithm with different inputs, and no matter what I try there are always one or two web pages whose page rank value is incorrect. The algorithm returns before the page ranks of all pages have converged, i.e. before the difference between the old and new rank of each page is less than the epsilon value.

What am I doing wrong? Why does my PageRank algorithm return page ranks before all pages have converged?

Code

The following function generates the HashMap shown in the figure above:

private static HashMap<URI, HashSet<URI>> makeGraph(HashSet<WebPage> webpages) {
        HashMap<URI, HashSet<URI>> webPagesGraph = new HashMap<>();
        HashSet<URI> singleWebPageLinks;

        HashSet<URI> availableWebPages = new HashSet<>();

        // add all the web pages available in data set in a collection
        for (WebPage doc : webpages) {
            availableWebPages.add(doc.getUri());
        }

        for (WebPage doc : webpages) {
            singleWebPageLinks = new HashSet<>();
            for (URI link : doc.getLinks()) {
                // if link is not pointing to the web page itself and is available in data set
                if (!link.equals(doc.getUri()) && availableWebPages.contains(link)) {
                    singleWebPageLinks.add(link);
                }
            }

            webPagesGraph.put(doc.getUri(), singleWebPageLinks);
        }

        return webPagesGraph;
}

The following function computes the page ranks:

private static HashMap<URI, Double> makePageRanks(HashMap<URI, HashSet<URI>> graph,
                                                   double decay,
                                                   int limit,
                                                   double epsilon) {

        // Step 1: The initialize step should go here
        HashMap<URI, Double> oldPageRanks = new HashMap<>();
        HashMap<URI, Double> newPageRanks = new HashMap<>();

        double singleWebPageNewRank;
        int numLinkedPagesBySinglePage;
        double singleWebPageOldRank;
        boolean haveConverged = true;
        double rank;

        // provide ranks to each web page
        // initially the rank given to each page is 1/(total no. of web pages).
        // also give new page rank to each page equal to zero
        for (URI key : graph.keySet()) {
            oldPageRanks.put(key, (double) 1 / graph.size());
            newPageRanks.put(key, 0.0);
        }

        for (int i = 0; i < limit; i++) {
            // Step 2: The update step should go here

            for (URI uri : graph.keySet()) {

                singleWebPageOldRank = oldPageRanks.get(uri);

                numLinkedPagesBySinglePage = graph.get(uri).size();

                // if any web page doesn't have any outgoing links to any other
                // web page, increase the new page rank for every web page
                if (numLinkedPagesBySinglePage == 0) {
                    for (URI u : newPageRanks.keySet()) {
                        singleWebPageNewRank = decay * (singleWebPageOldRank / graph.size());
                        saveNewRank(newPageRanks, u, singleWebPageNewRank);
                    }
                } // increase the new page rank of every web page that is pointed to
                // by current web page
                else {
                    for (URI linkedWebPageURI : graph.get(uri)) {
                        singleWebPageNewRank = decay * (singleWebPageOldRank / numLinkedPagesBySinglePage);
                        saveNewRank(newPageRanks, linkedWebPageURI, singleWebPageNewRank);
                    }
                }
            }

            // account for random user/surfer by adding (1 - decay) / (total no. of web pages)
            // to each web page's new rank
            for (URI uri : newPageRanks.keySet()) {
                rank = newPageRanks.get(uri);
                rank = rank + ((1 - decay) / graph.size());
                newPageRanks.put(uri, rank);

                // check for convergence
                // check if difference b/w old rand and new rank for each web page
                // is less than epsilon or not
                // if difference between old and new ranks is greater than or
                // equal to epsilon even for one web page, ranks haven't converged
                if (oldPageRanks.get(uri) - newPageRanks.get(uri) >= epsilon) {
                    haveConverged = false;
                }
            }

            if (haveConverged) {
                return oldPageRanks;
            } else {
                haveConverged = true;
                overWriteOldRanksWithNewRanks(oldPageRanks, newPageRanks);
            }
        }

        return oldPageRanks;
    }

The following two utility functions are called from the makePageRanks function:

// save the new page rank for a given web page by adding the passed new page rank to
// its previously saved page rank and then saving the new rank
private static void saveNewRank(HashMap<URI, Double> newPageRanks, URI pageURI, double pageNewRank) {
      pageNewRank += newPageRanks.get(pageURI);
      newPageRanks.put(pageURI, pageNewRank);
}

// overwrite old page ranks for next iteration
private static void overWriteOldRanksWithNewRanks(HashMap<URI, Double> oldRanks, HashMap<URI, Double> newRanks) {
    for (URI key : newRanks.keySet()) {
        oldRanks.put(key, newRanks.get(key));
        // make new rank for each web page equal to zero before next iteration
        newRanks.put(key, 0.0);
    }
}

Here is the simple WebPage class:

public class WebPage {

    private ArrayList<String> words;
    private URI uri;
    private ArrayList<URI> links;

    WebPage(URI uri, ArrayList<String> words, ArrayList<URI> links) {
        this.words = words;
        this.uri = uri;
        this.links = links;
    }

    public ArrayList<String> getWords() {
        return words;
    }

    public URI getUri() {
        return uri;
    }

    public ArrayList<URI> getLinks() {
        return links;
    } 
}

And finally the main method, in case anyone wants to see the input I give to the PageRank algorithm:

public static void main(String[] args) {
        ArrayList<URI> pageALinks = new ArrayList<>();
        pageALinks.add(createURI("www.b.com"));
        pageALinks.add(createURI("www.d.com"));
        URI pageAURI = createURI("www.a.com");
        WebPage pageA = new WebPage(pageAURI, new ArrayList<>(), pageALinks);


        ArrayList<URI> pageBLinks = new ArrayList<>();
        pageBLinks.add(createURI("www.c.com"));
        pageBLinks.add(createURI("www.d.com"));
        URI pageBURI = createURI("www.b.com");
        WebPage pageB = new WebPage(pageBURI, new ArrayList<>(), pageBLinks);


        ArrayList<URI> pageCLinks = new ArrayList<>();
        URI pageCURI = createURI("www.c.com");
        WebPage pageC = new WebPage(pageCURI, new ArrayList<>(), pageCLinks);


        ArrayList<URI> pageDLinks = new ArrayList<>();
        pageDLinks.add(createURI("www.a.com"));
        URI pageDURI = createURI("www.d.com");
        WebPage pageD = new WebPage(pageDURI, new ArrayList<>(), pageDLinks);


        ArrayList<URI> pageELinks = new ArrayList<>();
        pageELinks.add(createURI("www.d.com"));
        URI pageEURI = createURI("www.e.com");
        WebPage pageE = new WebPage(pageEURI, new ArrayList<>(), pageELinks);


        HashSet<WebPage> pages = new HashSet<>();
        pages.add(pageA);
        pages.add(pageB);
        pages.add(pageC);
        pages.add(pageD);
        pages.add(pageE);


        HashMap<URI, HashSet<URI>> graph = makeGraph(pages);
        HashMap<URI, Double> map = makePageRanks(graph, 0.85, 100, 0.00001); 
}
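
The createURI helper is not shown in the question; presumably it is just a thin wrapper around java.net.URI, something like this minimal sketch:

private static URI createURI(String s) {
    // assumption: the helper simply builds a URI from the raw string;
    // the real one might add a scheme or handle URISyntaxException differently
    return URI.create(s);
}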
Tags: java, algorithm, pagerank
1 Answer (score: 3)

Summary: you are testing the wrong value. You have to reduce the epsilon value in your code to get the page ranks within 0.00001 of the desired values. Two successive guesses within 0.00001 of each other do not imply that the result is within 0.00001 of the correct value.

Aside from the problems I mentioned in the comments, I believe I see your problem. It is a conceptual problem with convergence. It seems that the requirement of the unit test is to converge to within epsilon of the predetermined values. You haven't written your algorithm to do that. Your test

if (oldPageRanks.get(uri) - newPageRanks.get(uri) >= epsilon)

checks whether two successive approximations are within that value of each other. This does not guarantee that the new page ranks are within epsilon of the final values. For a guess x and a reference (correct) point z, the calculus/topology definition of a "close" neighbourhood reads something like this:

abs(x - z) < delta  ==>  abs(f(x) - f(z)) < epsilon
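
To make that concrete: assuming the iteration behaves roughly like a contraction with factor q, a standard fixed-point bound relates the distance to the limit z to the size of the last step:

abs(x_new - z) <= abs(x_new - x_old) / (1 - q)

If q is on the order of the decay value 0.85, a successive difference below 0.00001 only guarantees abs(x_new - z) <= 0.00001 / 0.15, i.e. roughly 6.7 times epsilon, which is consistent with one or two pages landing outside the required tolerance.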

You may have confused delta and epsilon.

If the gradient of the approximating function is outside the range [-1, +1], then you will likely miss by this mistake. You need to find the delta value for which the implication holds, and then use that quantity in place of your current epsilon. This is a simple change to the epsilon value you feed into your function.
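
A minimal sketch of that change, assuming a contraction factor around the decay value (the tolerance below is derived from 1 - 0.85 = 0.15, which is illustrative, not an exact analysis of your graph). Note that Math.abs also guards against a rank that grows between iterations, which the original comparison would miss:

// hypothetical tightened tolerance: successive iterates must be this close
// for the ranks themselves to end up within epsilon of the true values
double delta = (1 - decay) * epsilon;   // e.g. 0.15 * 0.00001

if (Math.abs(oldPageRanks.get(uri) - newPageRanks.get(uri)) >= delta) {
    haveConverged = false;
}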
