How to use epoll? A complete example in C

https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/

Thursday, 2 June 2011 @ 1238 GMT by Mukund Sivaraman

Network servers are traditionally implemented using a separate process or thread per connection. For high-performance applications that need to handle a very large number of clients simultaneously, this approach does not work well, because factors such as resource usage and context-switching time limit the number of clients that can be handled at a time. An alternative method is to perform non-blocking I/O in a single thread, along with some readiness notification mechanism that tells you when you can read or write more data on a socket.

This article is an introduction to Linux's epoll(7) facility, which is the best readiness notification facility in Linux. We will write sample code for a complete TCP server implementation in C. I assume you have C programming experience, know how to compile and run programs on Linux, and can read manpages of the various C functions that are used.

epoll was introduced in Linux 2.6, and is not available in other UNIX-like operating systems. It provides a facility similar to the select(2) and poll(2) functions:

  • select(2) can monitor only up to FD_SETSIZE descriptors at a time, typically a small number determined at libc's compile time (1024 on many systems).
  • poll(2) doesn't have a fixed limit on the number of descriptors it can monitor at a time, but, among other costs, it requires a linear scan of all the passed descriptors every time to check for readiness, which is O(n) and slow.

epoll has no such fixed limits, and does not perform any linear scans. Hence it is able to perform better and handle a larger number of events.

An epoll instance is created by epoll_create(2) or epoll_create1(2) (they take different arguments). epoll_ctl(2) is used to add and remove descriptors to be watched on the epoll instance. To wait for events on the watched set, epoll_wait(2) is used, which blocks until events are available. Please see their manpages for more info.

When descriptors are added to an epoll instance, they can be added in two modes: level triggered and edge triggered. When you use level-triggered mode and data is available for reading, epoll_wait(2) will always return with ready events. If you don't read the data completely and call epoll_wait(2) on the epoll instance watching the descriptor again, it will return again with a ready event because data is still available. In edge-triggered mode, you will get a readiness notification only once. If you don't read the data fully and call epoll_wait(2) on the epoll instance watching the descriptor again, it will block because the readiness event was already delivered.
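To make the calling sequence concrete, here is a minimal sketch (not the full server we build below) of the epoll lifecycle in edge-triggered mode: create an instance with epoll_create1(2), register a descriptor with epoll_ctl(2), then loop on epoll_wait(2) and drain each ready descriptor until read(2) has nothing left. The descriptor fd is assumed to already exist and to be non-blocking; error handling is omitted for brevity.

#include <sys/epoll.h>
#include <unistd.h>

static void
watch_and_drain (int fd)
{
  struct epoll_event ev, ready[16];
  int efd, n, i;
  char buf[512];

  efd = epoll_create1 (0);            /* create the epoll instance */

  ev.data.fd = fd;
  ev.events = EPOLLIN | EPOLLET;      /* read events, edge triggered */
  epoll_ctl (efd, EPOLL_CTL_ADD, fd, &ev);

  while (1)
    {
      n = epoll_wait (efd, ready, 16, -1);  /* block until events arrive */
      for (i = 0; i < n; i++)
        {
          /* In edge-triggered mode, keep reading until the kernel has
             nothing more for us (read returns -1 with EAGAIN) or EOF. */
          while (read (ready[i].data.fd, buf, sizeof buf) > 0)
            ;
        }
    }
}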

The epoll event structure that you pass to epoll_ctl(2) is shown below.  With every descriptor being watched, you can associate an integer or a pointer as user data.

typedef union epoll_data
{
  void        *ptr;
  int          fd;
  __uint32_t   u32;
  __uint64_t   u64;
} epoll_data_t;

struct epoll_event
{
  __uint32_t   events; /* Epoll events */
  epoll_data_t data;   /* User data variable */
};
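The server below stores only the descriptor itself in data.fd. If you need per-connection state (buffers, parser state and so on), you can store a pointer to your own structure in data.ptr instead; epoll hands the same pointer back with every event on that descriptor. A small sketch of the idea follows; struct connection and conn_new() are made-up helpers for illustration, not part of the server in this article.

#include <stdlib.h>
#include <sys/epoll.h>

/* Hypothetical per-connection state; not part of the server below. */
struct connection
{
  int fd;
  /* ... buffers, parser state, etc. ... */
};

static struct connection *
conn_new (int fd)
{
  struct connection *conn = calloc (1, sizeof *conn);
  if (conn != NULL)
    conn->fd = fd;
  return conn;
}

static int
watch_connection (int efd, int fd)
{
  struct epoll_event ev;
  struct connection *conn = conn_new (fd);

  if (conn == NULL)
    return -1;

  ev.data.ptr = conn;                 /* returned to us with each event */
  ev.events = EPOLLIN | EPOLLET;
  return epoll_ctl (efd, EPOLL_CTL_ADD, fd, &ev);
}

Inside the event loop you would then cast events[i].data.ptr back to a struct connection pointer, work with conn->fd, and free the structure when you close the connection.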

Let's write code now. We'll implement a tiny TCP server that prints everything sent to the socket on standard output. We'll begin by writing a function create_and_bind() which creates and binds a TCP socket:

static int
create_and_bind (char *port)
{
  struct addrinfo hints;
  struct addrinfo *result, *rp;
  int s, sfd;

  memset (&hints, 0, sizeof (struct addrinfo));
  hints.ai_family = AF_UNSPEC;     /* Return IPv4 and IPv6 choices */
  hints.ai_socktype = SOCK_STREAM; /* We want a TCP socket */
  hints.ai_flags = AI_PASSIVE;     /* All interfaces */

  s = getaddrinfo (NULL, port, &hints, &result);
  if (s != 0)
    {
      fprintf (stderr, "getaddrinfo: %s\n", gai_strerror (s));
      return -1;
    }

  for (rp = result; rp != NULL; rp = rp->ai_next)
    {
      sfd = socket (rp->ai_family, rp->ai_socktype, rp->ai_protocol);
      if (sfd == -1)
        continue;

      s = bind (sfd, rp->ai_addr, rp->ai_addrlen);
      if (s == 0)
        {
          /* We managed to bind successfully! */
          break;
        }

      close (sfd);
    }

  freeaddrinfo (result);   /* The address list is no longer needed */

  if (rp == NULL)
    {
      fprintf (stderr, "Could not bind\n");
      return -1;
    }

  return sfd;
}

create_and_bind() contains a standard block of code for getting an IPv4 or IPv6 socket in a portable way. It accepts a port argument as a string, to which argv[1] can be passed. The getaddrinfo(3) function returns a list of addrinfo structures in result, which are compatible with the hints passed in the hints argument. The addrinfo struct looks like this:

struct addrinfo
{
  int              ai_flags;
  int              ai_family;
  int              ai_socktype;
  int              ai_protocol;
  size_t           ai_addrlen;
  struct sockaddr *ai_addr;
  char            *ai_canonname;
  struct addrinfo *ai_next;
};

We walk through the structures one by one and try creating sockets using them, until we are able to both create and bind a socket. If we were successful, create_and_bind() returns the socket descriptor. If unsuccessful, it returns -1.

Next, let's write a function to make a socket non-blocking. make_socket_non_blocking() sets the O_NONBLOCK flag on the descriptor passed in the sfd argument:

static int
make_socket_non_blocking (int sfd)
{
  int flags, s;

  flags = fcntl (sfd, F_GETFL, 0);
  if (flags == -1)
    {
      perror ("fcntl");
      return -1;
    }

  flags |= O_NONBLOCK;
  s = fcntl (sfd, F_SETFL, flags);
  if (s == -1)
    {
      perror ("fcntl");
      return -1;
    }

  return 0;
}

Now, on to the main() function of the program, which contains the event loop. This is the bulk of the program:

#define MAXEVENTS 64

int
main (int argc, char *argv[])
{
  int sfd, s;
  int efd;
  struct epoll_event event;
  struct epoll_event *events;

  if (argc != 2)
    {
      fprintf (stderr, "Usage: %s [port]\n", argv[0]);
      exit (EXIT_FAILURE);
    }

  sfd = create_and_bind (argv[1]);
  if (sfd == -1)
    abort ();

  s = make_socket_non_blocking (sfd);
  if (s == -1)
    abort ();

  s = listen (sfd, SOMAXCONN);
  if (s == -1)
    {
      perror ("listen");
      abort ();
    }

  efd = epoll_create1 (0);
  if (efd == -1)
    {
      perror ("epoll_create");
      abort ();
    }

  event.data.fd = sfd;
  event.events = EPOLLIN | EPOLLET;
  s = epoll_ctl (efd, EPOLL_CTL_ADD, sfd, &event);
  if (s == -1)
    {
      perror ("epoll_ctl");
      abort ();
    }

  /* Buffer where events are returned */
  events = calloc (MAXEVENTS, sizeof event);

  /* The event loop */
  while (1)
    {
      int n, i;

      n = epoll_wait (efd, events, MAXEVENTS, -1);
      for (i = 0; i < n; i++)
	{
	  if ((events[i].events & EPOLLERR) ||
              (events[i].events & EPOLLHUP) ||
              (!(events[i].events & EPOLLIN)))
	    {
              /* An error has occurred on this fd, or the socket is not
                 ready for reading (why were we notified then?) */
	      fprintf (stderr, "epoll error\n");
	      close (events[i].data.fd);
	      continue;
	    }

	  else if (sfd == events[i].data.fd)
	    {
              /* We have a notification on the listening socket, which
                 means one or more incoming connections. */
              while (1)
                {
                  struct sockaddr in_addr;
                  socklen_t in_len;
                  int infd;
                  char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];

                  in_len = sizeof in_addr;
                  infd = accept (sfd, &in_addr, &in_len);
                  if (infd == -1)
                    {
                      if ((errno == EAGAIN) ||
                          (errno == EWOULDBLOCK))
                        {
                          /* We have processed all incoming
                             connections. */
                          break;
                        }
                      else
                        {
                          perror ("accept");
                          break;
                        }
                    }

                  s = getnameinfo (&in_addr, in_len,
                                   hbuf, sizeof hbuf,
                                   sbuf, sizeof sbuf,
                                   NI_NUMERICHOST | NI_NUMERICSERV);
                  if (s == 0)
                    {
                      printf("Accepted connection on descriptor %d "
                             "(host=%s, port=%s)\n", infd, hbuf, sbuf);
                    }

                  /* Make the incoming socket non-blocking and add it to the
                     list of fds to monitor. */
                  s = make_socket_non_blocking (infd);
                  if (s == -1)
                    abort ();

                  event.data.fd = infd;
                  event.events = EPOLLIN | EPOLLET;
                  s = epoll_ctl (efd, EPOLL_CTL_ADD, infd, &event);
                  if (s == -1)
                    {
                      perror ("epoll_ctl");
                      abort ();
                    }
                }
              continue;
            }
          else
            {
              /* We have data on the fd waiting to be read. Read and
                 display it. We must read whatever data is available
                 completely, as we are running in edge-triggered mode
                 and won't get a notification again for the same
                 data. */
              int done = 0;

              while (1)
                {
                  ssize_t count;
                  char buf[512];

                  count = read (events[i].data.fd, buf, sizeof buf);
                  if (count == -1)
                    {
                      /* If errno == EAGAIN, that means we have read all
                         data. So go back to the main loop. */
                      if (errno != EAGAIN)
                        {
                          perror ("read");
                          done = 1;
                        }
                      break;
                    }
                  else if (count == 0)
                    {
                      /* End of file. The remote has closed the
                         connection. */
                      done = 1;
                      break;
                    }

                  /* Write the buffer to standard output */
                  s = write (1, buf, count);
                  if (s == -1)
                    {
                      perror ("write");
                      abort ();
                    }
                }

              if (done)
                {
                  printf ("Closed connection on descriptor %d\n",
                          events[i].data.fd);

                  /* Closing the descriptor will make epoll remove it
                     from the set of descriptors which are monitored. */
                  close (events[i].data.fd);
                }
            }
        }
    }

  free (events);

  close (sfd);

  return EXIT_SUCCESS;
}

main() first calls create_and_bind(), which sets up the listening socket. It then makes the socket non-blocking and calls listen(2). Next it creates an epoll instance in efd, to which it adds the listening socket sfd, watching for input events in edge-triggered mode.

The outer while loop is the main event loop. It calls epoll_wait(2), in which the thread remains blocked, waiting for events. When events are available, epoll_wait(2) returns them in the events argument, which is an array of epoll_event structures.

The epoll instance in efd is continuously updated in the event loop when we add new incoming connections to watch, and remove existing connections when they die.

When events are available, they can be of three types:

  • Errors: When an error condition occurs, or the event is not a notification that data is available for reading, we simply close the associated descriptor. Closing the descriptor automatically removes it from the watched set of the epoll instance efd.
  • New connections: When the listening descriptor sfd is ready for reading, it means one or more new connections have arrived. While there are new connections, accept(2) each of them, print a message about it, make the incoming socket non-blocking, and add it to the watched set of the epoll instance efd.
  • Client data: When data is available for reading on any of the client descriptors, we use read(2) to read the data in pieces of 512 bytes in an inner while loop. This is because we have to read all the data that is available now: we won't get further events about it, as the descriptor is watched in edge-triggered mode. The data which is read is written to stdout (fd=1) using write(2). If read(2) returns 0, it means EOF and we can close the client's connection. If -1 is returned and errno is set to EAGAIN, it means that all data for this event has been read, and we can go back to the main loop. (Note that write(2) can write fewer bytes than requested; a small sketch of a write-all helper follows this list.)
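One detail the example glosses over: write(2) may write fewer bytes than requested, and the server simply aborts if it fails. Below is a minimal sketch of a helper that retries short writes, assuming the output descriptor (stdout here) stays blocking; the name write_all is my own, not part of the original code.

#include <errno.h>
#include <unistd.h>

static int
write_all (int fd, const char *buf, size_t len)
{
  size_t off = 0;

  while (off < len)
    {
      ssize_t n = write (fd, buf + off, len - off);
      if (n == -1)
        {
          if (errno == EINTR)
            continue;               /* interrupted by a signal, retry */
          return -1;                /* real error */
        }
      off += (size_t) n;
    }

  return 0;
}

In the server, the write (1, buf, count) call could then become write_all (1, buf, count).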

That's that. It goes around and around in a loop, adding and removing descriptors in the watched set.

Download the epoll-example.c program.

Update 1: Level and edge triggered definitions were erroneously reversed (though the code was correct). It was noticed by Reddit user bodski. The article has been corrected now. I should have proof-read it before posting. Apologies, and thank you for pointing out the mistake. :)

Update 2: The code has been modified to run accept(2) until it says it would block, so that if multiple connections have arrived, we accept all of them. It was noticed by Reddit user cpitchford. Thank you for the comments. :)

 
