This article looks at the question "mgo - query performance seems consistently slow (500-650ms)" and its answer, which should be a useful reference for anyone hitting the same problem.

Problem description

My data layer uses Mongo aggregation a decent amount, and on average, queries are taking 500-650ms to return. I am using mgo.

A sample query function is shown below which represents what most of my queries look like.

func (r userRepo) GetUserByID(id string) (User, error) {
    info, err := db.Info()
    if err != nil {
        log.Fatal(err)
    }

    session, err := mgo.Dial(info.ConnectionString())
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    var user User
    c := session.DB(info.Db()).C("users")
    o1 := bson.M{"$match": bson.M{"_id": id}}
    o2 := bson.M{"$project": bson.M{
        "first":           "$first",
        "last":            "$last",
        "email":           "$email",
        "fb_id":           "$fb_id",
        "groups":          "$groups",
        "fulfillments":    "$fulfillments",
        "denied_requests": "$denied_requests",
        "invites":         "$invites",
        "requests": bson.M{
            "$filter": bson.M{
                "input": "$requests",
                "as":    "item",
                "cond": bson.M{
                    "$eq": []interface{}{"$$item.active", true},
                },
            },
        },
    }}
    pipeline := []bson.M{o1, o2}
    err = c.Pipe(pipeline).One(&user)
    if err != nil {
        return user, err
    }
    return user, nil
}

The user struct I have looks like the following:

type User struct {
    ID             string        `json:"id" bson:"_id,omitempty"`
    First          string        `json:"first" bson:"first"`
    Last           string        `json:"last" bson:"last"`
    Email          string        `json:"email" bson:"email"`
    FacebookID     string        `json:"facebook_id" bson:"fb_id,omitempty"`
    Groups         []UserGroup   `json:"groups" bson:"groups"`
    Requests       []Request     `json:"requests" bson:"requests"`
    Fulfillments   []Fulfillment `json:"fulfillments" bson:"fulfillments"`
    Invites        []GroupInvite `json:"invites" bson:"invites"`
    DeniedRequests []string      `json:"denied_requests" bson:"denied_requests"`
}

Based on what I have provided, is there anything obvious that would suggest why my queries are averaging 500-650ms?

I know that I am probably swallowing a bit of a performance hit by using aggregation pipeline, but I wouldn't expect it to be this bad.

Solution

Yes, there is. You are calling mgo.Dial() before executing each query. mgo.Dial() has to connect to the MongoDB server every time, and you close that connection right after the query. Establishing the connection may very well take hundreds of milliseconds, including authentication, allocating resources (both at the server and client side), etc. This is very wasteful.
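
A quick way to confirm this is to time the dial and the aggregation separately. Below is a minimal sketch, assuming the "log" and "time" imports plus the gopkg.in/mgo.v2 driver, and reusing the User struct, connection info and pipeline from the question (timeUserQuery is just an illustrative name):

func timeUserQuery(connStr, dbName string, pipeline []bson.M) {
    start := time.Now()
    session, err := mgo.Dial(connStr) // connects and authenticates on every call
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
    log.Printf("mgo.Dial took %v", time.Since(start))

    var user User // User struct as defined in the question
    start = time.Now()
    err = session.DB(dbName).C("users").Pipe(pipeline).One(&user)
    log.Printf("aggregation took %v (err = %v)", time.Since(start), err)
}

In a setup like this the dial usually dominates the total time, since the $match on _id is served by the default _id index.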

Create a global session variable, connect on startup once (e.g. in a package init() function), and use that session (or a copy / clone of it, obtained with Session.Copy() or Session.Clone()). For example:

var session *mgo.Session
var info *db.Inf // Use your type here

func init() {
    var err error
    if info, err = db.Info(); err != nil {
        log.Fatal(err)
    }
    if session, err = mgo.Dial(info.ConnectionString()); err != nil {
        log.Fatal(err)
    }
}

func (r userRepo) GetUserByID(id string) (User, error) {
    sess := session.Clone()
    defer sess.Close()

    // Now we use sess to execute the query:
    var user User
    c := sess.DB(info.Db()).C("users")
    // Rest of the method is unchanged...
}
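
One note on the two options mentioned above: Session.Clone() may reuse the socket the original session has already reserved, while Session.Copy() behaves like a fresh session with the same credentials and acquires its own socket from the pool, which tends to spread load better when many goroutines run queries concurrently; in both cases the returned session must be closed. A sketch of the same method using Copy(), under the same global session and info assumptions as above:

func (r userRepo) GetUserByID(id string) (User, error) {
    sess := session.Copy() // like Clone(), but never shares the original session's socket
    defer sess.Close()     // copies and clones must always be closed

    var user User
    c := sess.DB(info.Db()).C("users")
    pipeline := []bson.M{ /* same $match and $project stages as in the question */ }
    err := c.Pipe(pipeline).One(&user)
    return user, err
}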

That concludes this article on "mgo - query performance seems consistently slow (500-650ms)". Hopefully the answer above is helpful.
