This article looks at boost::asio, thread pools, and how to monitor the health of worker threads, in the form of a question and a recommended answer.

Problem description

I've implemented a thread pool using boost::asio and some number of boost::thread objects calling boost::asio::io_service::run(). However, a requirement that I've been given is to have a way to monitor all threads for "health". My intent is to make a simple sentinel object that can be passed through the thread pool -- if it makes it through, then we can assume that the thread is still processing work.

However, given my implementation, I'm not sure how (or if) I can reliably monitor all the threads in the pool. I've simply delegated the thread function to boost::asio::io_service::run(), so posting a sentinel object into the io_service instance won't guarantee which thread will actually get that sentinel and do the work.

One option may be to just periodically insert the sentinel and hope that it gets picked up by each thread at least once in some reasonable amount of time, but that obviously isn't ideal (a sketch of this approach follows the example code below).

Take the following example. Due to the way that the handler is coded, in this instance we can see that each thread will do the same amount of work; but in reality I will not have control of the handler implementation: some handlers can be long running while others will complete almost immediately.

#include <iostream>
#include <memory>   // std::unique_ptr
#include <boost/asio.hpp>
#include <vector>
#include <boost/thread.hpp>
#include <boost/bind.hpp>

void handler()
{
   std::cout << boost::this_thread::get_id() << "\n";
   boost::this_thread::sleep(boost::posix_time::milliseconds(100));
}

int main(int argc, char **argv)
{
   boost::asio::io_service svc(3);

   std::unique_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(svc));

   boost::thread one(boost::bind(&boost::asio::io_service::run, &svc));
   boost::thread two(boost::bind(&boost::asio::io_service::run, &svc));
   boost::thread three(boost::bind(&boost::asio::io_service::run, &svc));

   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);
   svc.post(handler);

   work.reset();

   three.join();
   two.join();
   one.join();

   return 0;
}
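
To make the problem with the periodic-sentinel idea concrete, here is a minimal sketch (not part of the original question; the sentinel function and the 500 ms wait are illustrative assumptions). Each sentinel simply records the id of the thread that executed it; because io_service hands work to whichever thread is free, posting one sentinel per thread gives no guarantee that each thread handles exactly one, so with short handlers a single thread can easily absorb them all.

#include <iostream>
#include <memory>
#include <set>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

boost::mutex mtx;
std::set<boost::thread::id> seen; // ids of threads that executed a sentinel

// illustrative sentinel: records which thread picked it up
void sentinel()
{
   boost::mutex::scoped_lock lock(mtx);
   seen.insert(boost::this_thread::get_id());
}

int main()
{
   boost::asio::io_service svc;

   std::unique_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(svc));

   boost::thread one(boost::bind(&boost::asio::io_service::run, &svc));
   boost::thread two(boost::bind(&boost::asio::io_service::run, &svc));
   boost::thread three(boost::bind(&boost::asio::io_service::run, &svc));

   // post one sentinel per thread and hope each thread takes one;
   // nothing in io_service guarantees this distribution
   svc.post(sentinel);
   svc.post(sentinel);
   svc.post(sentinel);

   // arbitrary wait for the sentinels to be processed
   boost::this_thread::sleep(boost::posix_time::milliseconds(500));

   {
      boost::mutex::scoped_lock lock(mtx);
      std::cout << "sentinels were executed by " << seen.size() << " of 3 threads\n";
   }

   work.reset();

   three.join();
   two.join();
   one.join();

   return 0;
}

On a lightly loaded pool the printed count is often less than 3, which is exactly why the recommended answer below tracks per-thread statistics instead.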


Recommended answer

The solution that I used relies on the fact that I own the implementation of the thread pool objects. I created a wrapper type that updates statistics and copies the user-defined handlers that are posted to the thread pool. Only this wrapper type is ever posted to the underlying io_service. This method allows me to keep track of the handlers that are posted/executed without having to be intrusive into the user code.

Here's a stripped down and simplified example:

#include <algorithm>    // std::for_each
#include <functional>   // std::function
#include <iostream>
#include <map>          // std::map
#include <memory>
#include <vector>
#include <boost/thread.hpp>
#include <boost/asio.hpp>

// Supports scheduling anonymous jobs that are
// executable as returning nothing and taking
// no arguments
typedef std::function<void(void)> functor_type;

// some way to store per-thread statistics
typedef std::map<boost::thread::id, int> thread_jobcount_map;

// only this type is actually posted to
// the asio proactor, this delegates to
// the user functor in operator()
struct handler_wrapper
{
   handler_wrapper(const functor_type& user_functor, thread_jobcount_map& statistics)
      : user_functor_(user_functor)
      , statistics_(statistics)
   {
   }

   void operator()()
   {
      user_functor_();

      // just for illustration purposes, assume a long running job
      boost::this_thread::sleep(boost::posix_time::milliseconds(100));

      // increment executed jobs
      ++statistics_[boost::this_thread::get_id()];
   }

   functor_type         user_functor_;
   thread_jobcount_map& statistics_;
};

// anonymous thread function, just runs the proactor
void thread_func(boost::asio::io_service& proactor)
{
   proactor.run();
}

class ThreadPool
{
public:
   ThreadPool(size_t thread_count)
   {
      threads_.reserve(thread_count);

      work_.reset(new boost::asio::io_service::work(proactor_));

      for(size_t curr = 0; curr < thread_count; ++curr)
      {
         boost::thread th(thread_func, boost::ref(proactor_));

         // inserting into this map before any work can be scheduled
         // on it means that we don't have to lock it for lookups,
         // since we don't dynamically add threads
         thread_jobcount_.insert(std::make_pair(th.get_id(), 0));

         threads_.emplace_back(std::move(th));
      }
   }

   // the only way for a user to get work into
   // the pool is to use this function, which ensures
   // that the handler_wrapper type is used
   void schedule(const functor_type& user_functor)
   {
      handler_wrapper to_execute(user_functor, thread_jobcount_);
      proactor_.post(to_execute);
   }

   void join()
   {
      // join all threads in pool:
      work_.reset();
      proactor_.stop();

      std::for_each(
         threads_.begin(),
         threads_.end(),
         [] (boost::thread& t)
      {
         t.join();
      });
   }

   // just an example showing statistics
   void log()
   {
      std::for_each(
         thread_jobcount_.begin(),
         thread_jobcount_.end(),
         [] (const thread_jobcount_map::value_type& it)
      {
         std::cout << "Thread: " << it.first << " executed " << it.second << " jobs\n";
      });
   }

private:
   std::vector<boost::thread> threads_;
   std::unique_ptr<boost::asio::io_service::work> work_;
   boost::asio::io_service    proactor_;
   thread_jobcount_map        thread_jobcount_;
};

struct add
{
   add(int lhs, int rhs, int* result)
      : lhs_(lhs)
      , rhs_(rhs)
      , result_(result)
   {
   }

   void operator()()
   {
      *result_ = lhs_ + rhs_;
   }

   int lhs_,rhs_;
   int* result_;
};

int main(int argc, char **argv)
{
   // some "state objects" that are
   // manipulated by the user functors
   int x = 0, y = 0, z = 0;

   // pool of three threads
   ThreadPool pool(3);

   // schedule some handlers to do some work
   pool.schedule(add(5, 4, &x));
   pool.schedule(add(2, 2, &y));
   pool.schedule(add(7, 8, &z));

   // give all the handlers time to execute
   boost::this_thread::sleep(boost::posix_time::milliseconds(1000));

   std::cout
      << "x = " << x << "\n"
      << "y = " << y << "\n"
      << "z = " << z << "\n";

   pool.join();

   pool.log();
}

Output:

x = 9
y = 4
z = 15
Thread: 0000000000B25430 executed 1 jobs
Thread: 0000000000B274F0 executed 1 jobs
Thread: 0000000000B27990 executed 1 jobs
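
The per-thread counters above can be turned into an actual health check. The following is a minimal sketch, not part of the original answer: it assumes the pool can hand out copies of thread_jobcount_ (for example through a hypothetical snapshot() accessor that copies the map under a lock), and it simply flags any thread whose count has not advanced between two snapshots. Note that an idle thread with no queued work will also show no progress, so such a check is only meaningful while work (or sentinels) is known to be queued.

#include <iostream>
#include <map>
#include <boost/thread.hpp>

// same per-thread statistics type as in the example above
typedef std::map<boost::thread::id, int> thread_jobcount_map;

// compares two snapshots of the per-thread job counters and reports
// any thread whose count did not advance between them
void report_stalled_threads(const thread_jobcount_map& previous,
                            const thread_jobcount_map& current)
{
   for(thread_jobcount_map::const_iterator it = current.begin();
       it != current.end();
       ++it)
   {
      thread_jobcount_map::const_iterator prev = previous.find(it->first);

      if(prev != previous.end() && prev->second == it->second)
      {
         std::cout << "Thread " << it->first
                   << " has executed no new jobs since the last check\n";
      }
   }
}

A monitoring thread would call something like this periodically, keeping the previous snapshot between calls.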

That concludes this look at boost::asio, thread pools, and thread monitoring; hopefully the recommended answer above is a useful reference.
