Learning cxx

using namespace std;

container

tuple
A tuple is an object capable of holding a collection of elements. Each element can be of a different type.
stringstream
Note that stringstream.str() returns a temporary object, so naively combining it with c_str() is an error, as in this example:
const char * stupid = ss.str().c_str(); // WRONG!

boost::MultiArray

method

lower_bound(first, last, val, operator<)
Returns an iterator pointing to the first element in the range [first,last) which does not compare less than val (i.e. the first element >= val). See also:
upper_bound
prev(iterator, n=1)
Returns an iterator pointing to the element that it would be pointing to if advanced -n positions. See also:
next advance distance
max_element(first, last, operator<)
Finds the greatest element in the range [first, last). See also:
min_element
accumulate(first, last, init, BinaryOperation/functor/lambda...)
Computes the sum of the given value init and the elements in the range [first, last). The first version uses operator+ to sum up the elements, the second version uses the given binary function op. See also:
transform(inFirst, inLast, outFirst, operator)
for_each(first, last, UnaryFunction/lambda...)
copy copy_backward remove_copy
fill
fill_n
generate
replace
toupper
bind(F&& f, Args&&... args), placeholder: _1, _2...
(partial function application) The function template bind generates a forwarding call wrapper for f. Calling this wrapper is equivalent to invoking f with some of its arguments bound to args. See also:
bind1st mem_fn function decltype

Concurrency: a lecture series posted on YouTube by Qian Bo

clang++ -DDBG -D_GLIBCXX_USE_NANOSLEEP -Wall -Wextra -Werror -Wshadow -std=c++11 -lpthread -g -o $TARGET $SRC

Thread

std::thread t1((Fctor()), std::move(s))

  • Arguments are always passed by value (even if the parameter is declared with &); to pass a reference, use std::ref or std::move. Copying is likewise forbidden for std::thread (std::thread t2 = std::move(t1)), mutex, unique_lock, promise, and future; note that lock_guard (like mutex itself) cannot even be moved
  • For a functor, an extra pair of parentheses is needed ( (Fctor()) ) to avoid the most vexing parse
  • Related operations: t1.join(), t1.detach(), std::this_thread::get_id(); use std::thread::hardware_concurrency() to avoid oversubscription.

Avoiding data race

  • Synchronize data access: std::mutex mu_; std::lock_guard<std::mutex> guard(mu_); // RAII
  • Never leak a handle of data to outside;
  • Design interface appropriately.

Avoiding deadlock

  • Prefer locking a single mutex; a good lock covers only the smallest critical section
  • Avoid locking a mutex and then calling a user-provided function (you can never know whether that function will lock another mutex);
  • Lock mutexes in the same order;
  • Use std::lock() to lock more than one mutex :
std::lock(mu1, mu2);
std::lock_guard<mutex> locker1(mu1, std::adopt_lock);
std::lock_guard<mutex> locker2(mu2, std::adopt_lock);

Unique lock and lazy initialization

  • unique lock: calling std::mutex's lock()/unlock() directly is discouraged (not only must you write both calls, an exception in between would leave the mutex locked). Besides the lock_guard RAII technique mentioned above, `std::unique_lock<std::mutex> locker(mu, std::defer_lock);` lets you lock()/unlock() repeatedly (the two extra arguments may also be omitted), so you no longer need to add pair after pair of {} just to shrink the critical section.
    Note: although more flexible, unique_lock carries slightly more overhead, so don't overuse it
  • singleton/log::open("file.log"): sometimes a resource must be opened exactly once, and even the double-checked-locking idiom may be unsafe (CPUs execute out of order). Nothing is more pleasant here than std::once_flag onc_: reliable, efficient, simple. std::call_once(onc_, [&](){ log_.open("file.log"); });

Condition variable

A semaphore is an inter-process communication mechanism, whereas a condition variable is a communication mechanism between threads.

// producer-consumer
std::deque<int> q;
std::mutex mu;
std::condition_variable cond;
void producer() {
    for (int count=10; count>0; --count) {
        std::unique_lock<std::mutex> locker(mu);
        q.push_front(count);
        locker.unlock();
        cond.notify_one(); // or cond.notify_all()
        std::this_thread::sleep_for(chrono::milliseconds(30));
    }
}
void consumer() {
    for (; true; ) {
        std::unique_lock<std::mutex> locker(mu);
        // Consider why wait() needs the locker argument, why locker must be
        // a unique_lock, and why the second argument (here a lambda) is needed
        cond.wait(locker, [](){ return !q.empty(); });
        int data = q.back();
        q.pop_back();
        locker.unlock();
        cout << std::this_thread::get_id() << " got a value: " << data << endl;
    }
}

The reason behind that comment: consider wait()'s internal implementation: it essentially sleeps and is woken by notify. Unsurprisingly, it must first call unlock() so that it does not hold the lock while sleeping, and then re-lock() after being woken. A spurious wakeup can occur during this process: after the lock is reacquired, the wakeup condition (q non-empty) may no longer hold, so the second (predicate) argument is needed to check it again.

Future, Promise and async

Both are communication tools for (thread-based) asynchronous mechanisms

future: the child thread returns a value to the main thread

int factorialR(int N) { return N<=1 ? 1 : N*factorialR(N-1); }
int main() {
    std::future<int> fu = std::async(std::launch::deferred /*or std::launch::async*/, factorialR, 4);
    int x = fu.get(); // call once only
    return 0;
}

promise: the main thread passes a value to the child thread

int ffactorial(std::future<int> &f) {
    int N = f.get(); // call once only
    int ans = 1;
    for (int i=2; i<=N; ++i) ans *= i;
    return ans;
}
// shared_future can be copied!!! So it can broadcast the value to several
// child threads, whereas a plain future can be used by only one thread.
int sfactorial(std::shared_future<int> f) {
    int N = f.get(); // get the promised value, or an exception: future_errc::broken_promise
    int ans = 1;
    for (int i=2; i<=N; ++i) ans *= i;
    return ans;
}
void testFuture() {
    std::promise<int> p; // both promise & future can only be moved, not copied/assigned...
    std::future<int> f = p.get_future();
    std::future<int> fu = std::async(std::launch::deferred, ffactorial, std::ref(f));
    std::this_thread::sleep_for(chrono::milliseconds(20)); // do something else
    p.set_value(4);
    int x = fu.get(); // gotcha!
    cout << "get from child: " << x << endl;
}
void testSharedFuture() {
    std::promise<int> p;
    std::shared_future<int> sf = p.get_future();
    std::future<int> fu1 = std::async(std::launch::deferred, sfactorial, sf);
    std::future<int> fu2 = std::async(std::launch::deferred, sfactorial, sf);
    p.set_value(5); // fulfill the promise, or p.set_exception(std::make_exception_ptr(std::runtime_error("human err")));
    int x1 = fu1.get();
    cout << "get from child1: " << x1 << endl;
    // p.set_value(3); setting the value twice is not allowed
    int x2 = fu2.get();
    cout << "get from child2: " << x2 << endl;
}

packaged_task

The class template std::packaged_task wraps any callable target (function, lambda expression, bind expression, or another function object) so that it can be invoked asynchronously. Its return value or exception thrown is stored in a shared state which can be accessed through std::future objects.
Just like std::function, std::packaged_task is a polymorphic, allocator-aware container: the stored callable target may be allocated on heap or with a provided allocator.
As the name suggests, it can be used to implement a task queue.

std::deque<std::packaged_task<int()>> task_q;
std::mutex mu;
std::condition_variable cond;
void thread_foo() {
    std::packaged_task<int()> tas;
    {
        std::unique_lock<std::mutex> locker(mu);
        cond.wait(locker, [](){ return !task_q.empty(); });
        tas = std::move(task_q.front());
        task_q.pop_front();
    }
    tas();
}
int main() {
    std::thread t1(thread_foo);
    std::packaged_task<int(/*function signature*/)> tas(bind(factorialR, 6)); // bind fits perfectly here
    std::future<int> fu = tas.get_future();
    {
        std::lock_guard<std::mutex> locker(mu);
        task_q.push_back(std::move(tas)); // consider why move is needed
    }
    cond.notify_one();
    cout << fu.get();
    t1.join();
    return 0;
}

3 ways to get a future:

  • promise::get_future()
  • packaged_task::get_future()
  • async returns a future

summary & DBG(gdb)

int main() {
    /* thread */
    std::thread t1(factorialR, 6);
    std::this_thread::sleep_for(chrono::milliseconds(20)); // do something else
    chrono::steady_clock::time_point tp = chrono::steady_clock::now() + chrono::microseconds(4);
    std::this_thread::sleep_until(tp);
    /* Mutex */
    std::mutex mu; // mu.lock()/unlock() is rarely used directly
    std::lock_guard<mutex> locker(mu);
    std::unique_lock<mutex> ulocker(mu, std::defer_lock); // can lock()/unlock() repeatedly
    ulocker.lock();
    ulocker.try_lock();
    ulocker.try_lock_for(chrono::nanoseconds(500)); // requires a timed_mutex
    ulocker.try_lock_until(tp);
    /* Condition Variable */
    std::condition_variable cond;
    cond.wait(ulocker);
    cond.wait_for(ulocker, chrono::microseconds(2));
    cond.wait_until(ulocker, tp);
    /* Future and Promise */
    std::promise<int> p;
    std::future<int> f = p.get_future();
    f.get();  // internally calls f.wait()
    f.wait(); // there are also f.wait_for() and f.wait_until()
    /* async() */
    std::future<int> fu = async(factorialR, 6);
    /* packaged_task */
    std::packaged_task<int(int)> t(factorialR);
    std::future<int> fu2 = t.get_future();
    t(6);
    return 0;
}

DBG(gdb):

  • directory
  • info thread
  • thread $ID$
  • break $file.cpp:LINE$ thread all
  • set scheduler-locking off|on|step