Guest Wifi Setup DD-WRT Tutorial

This is a tutorial for setting up a DD-WRT router to have a separate isolated (virtual) guest wifi access point. At the end of the tutorial you will have two wifi networks, one private, and one public for the guests. Guests will not have access to the router or anything else on your network.

  1. Your build must be ≥ 23020, and you should already have a working network with internet access. (image: network basic setup)
  2. Create the virtual access point for your guests.
    1. Go to Wireless -> Basic Setup
    2. Click Add in Virtual Interfaces. Fill out the details as in the image. (image: wireless settings)
  3. Optional: Go to Wireless -> Wireless Security, choose your encryption for the guest wifi.
  4. Now enable DHCPD for the guest wifi so IP addresses can be assigned.
    1. Go to Setup -> Networking and add another DHCP server for the guest network as shown. (image: networking)
  5. Optional: Set up Quality of Service (QoS) to limit guest network bandwidth.
    1. Configure QoS as shown. (image: qos)
    2. Results from my speed test. (image: speed test)
  6. Restart the router. This is important: I found that changed settings sometimes take a while to activate, and it's hard to tell whether they have had any effect. A quick way to verify the guest isolation afterwards is sketched below.
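
To check the isolation, connect a laptop to the guest SSID and see what it can reach. A rough sketch, assuming the private LAN is 192.168.1.0/24 and the guest DHCP hands out 192.168.2.x addresses; substitute whatever subnets you actually configured in Setup -> Networking:

ip addr show wlan0      # should show a 192.168.2.x address from the guest DHCP
ping -c 3 192.168.2.1   # the guest gateway should answer
ping -c 3 192.168.1.1   # the private router/LAN should NOT answer if isolation works
ping -c 3 8.8.8.8       # internet access should still work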

 


Guacamole Add User

Adding a user

https://gist.github.com/sunapi386/9dc6eb841f1454733e02

Create a user

Then create the user; remember to set the password to “chessman123”.

sudo adduser binsun
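
On Ubuntu/Debian, adduser prompts for the password interactively; if you need to set or change it afterwards, a quick sketch:

sudo passwd binsun   # prompts for the new password, e.g. chessman123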

Modifying guacamole’s user login data

Edit the user mapping, unless you’re using a database to store user logins.

sudo vi /etc/guacamole/user-mapping.xml

Create an entry like the one below, with one connection for VNC and one for SSH.

<!-- User for binsun -->
<authorize username="binsun" password="chessman123">
    <!-- First authorized connection -->
    <connection name="vnc">
        <protocol>vnc</protocol>
        <param name="hostname">localhost</param>
        <param name="port">5904</param>     <!-- Edit this -->
        <param name="password">qwe123</param> <!-- Password for vncserver -->
        <param name="encodings">zrle ultra copyrect hextile zlib corre rre raw</param>
    </connection>
    <!-- Second authorized connection -->
    <connection name="ssh">
        <protocol>ssh</protocol>
        <param name="hostname">localhost</param>
        <param name="port">22</param>
        <param name="username">binsun</param> <!-- Edit this -->
        <param name="password">chessman123</param> <!-- Edit this -->
    </connection>
</authorize>
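
Optionally, user-mapping.xml can store an MD5 hash of the password instead of plaintext by adding encoding="md5" to the <authorize> tag. A quick sketch for generating the hash:

# Generate the hex digest, then use it as the password attribute together with
# encoding="md5" in the <authorize> tag instead of the plaintext value.
echo -n 'chessman123' | md5sum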

SSH Service

I assume the SSH service is already running on port 22, so that should already work.
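
If you want to double-check, a quick sketch (assuming Ubuntu's service wrapper and that nmap is installed):

sudo service ssh status    # should report the ssh daemon as running
nmap -Pn -p 22 localhost   # port 22 should show as open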

Relaunching service

Restart the hosting service (Tomcat) so it loads the updated user-mapping.xml. We also still need to start the VNC service listening on port 5904; that is covered under Setting up VNC below.

sudo /etc/init.d/tomcat7 restart

Logging

If logging in fails, check the log for login attempts.

tail -f /var/log/tomcat7/catalina.out

Setting up VNC

In the newly created user’s ~/.vnc directory, create a file called xstartup. This file is a script that gets run when the vncserver starts; Guacamole then connects to this vncserver.

First we switch to that user.

su binsun

Then create the startup script

mkdir ~/.vnc
chmod 700 ~/.vnc
cd ~/.vnc
vi xstartup

Put these lines in.

#!/bin/sh
xrdb $HOME/.Xresources
xsetroot -solid grey
#x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#x-window-manager &
# Fix to make GNOME work
export XKL_XMODMAP_DISABLE=1
#/etc/X11/Xsession
startxfce4

And make this executable.

chmod +x xstartup
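
The VNC connection password (the qwe123 referenced in user-mapping.xml) is separate from the Linux login password. Most vncserver implementations (e.g. TightVNC, TigerVNC) prompt for it the first time vncserver runs, or you can set it ahead of time:

vncpasswd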

While we’re still logged in as the user, start the vncserver

vncserver :4

This starts a VNC server listening on port 5904 (5900 + display number). You should be able to see it with nmap.

nmap -Pn localhost

Starting Nmap 6.40 ( http://nmap.org ) at 2015-10-19 21:40 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 988 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
631/tcp  open  ipp
3306/tcp open  mysql
5902/tcp open  vnc-2
5903/tcp open  vnc-3
5904/tcp open  unknown <------- this is it
6002/tcp open  X11:2
6003/tcp open  X11:3
6004/tcp open  X11:4
6005/tcp open  X11:5
8080/tcp open  http-proxy

If you can’t get vnc started, look at this log.

tail -f /var/log/syslog
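
If you edit xstartup later, the change only takes effect once the VNC server is restarted; a quick sketch, assuming display :4 as above:

vncserver -kill :4   # stop the server for display :4 (port 5904)
vncserver :4         # start it again so the new xstartup is picked up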

Customer Support Platform

October 1, 2015

Increase customer support efficiency by using preformed answers, optionally modifying them before replying to customers.

Goal

Build an API as a demo for investors, about three weeks away. Basically, a customer hands over their customer support chat logs, and we provide query responses back through an API. Restated, the problem is: retrieve relevant responses (from previously seen responses) based on the customer's question.

  • Treat customer question as a query.
  • Retrieve a reasonable response.
  • The meat of the problem lies in creating a good mapping from query to a response.

Given the short time to build a demo, I look to using pre-existing tools rather than developing an entire pipeline from scratch. Obviously it's hard to publish any papers on using existing techniques, but our goal constraint involves more engineering than research.

Pre-existing tools approach

Apache Lucene

Apache Lucene is arguably the most advanced, high-performance, and fully featured search engine library in existence today, open source or proprietary. But since it is only a library, it would be difficult to get started with: you'd need to build an application around it. This is the search engine library used behind Wikipedia, The Guardian, Stack Overflow, GitHub, Akamai, Netflix, and LinkedIn.

  • Lucene has pluggable relevance ranking models built in, including the Vector Space Model and Okapi BM25, along with NLP features such as information extraction and sentiment analysis.
  • The power of Lucene is text searching and analysis. It's very fast because all data in every field is indexed by default. Applications focused on text search should definitely use Lucene.

There are two predominant platforms built on top of Lucene: Apache Solr and Elasticsearch. Both are open source and are designed for full-text search on top of Lucene.

Elasticsearch is friendlier to teams that are used to REST APIs and JSON and don't have a Java background, so we'll run with that.

Elasticsearch

Elasticsearch is also written in Java and uses Lucene internally but makes full-text search easy by hiding the complexities of Lucene behind a simple, coherent, RESTful API.
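
As a rough sketch of how this could look, assuming a local Elasticsearch node on port 9200 and a hypothetical index named support (the index, type, and field names here are ours, not anything Elasticsearch prescribes):

# Index one previously seen agent response.
curl -XPUT 'localhost:9200/support/response/1' -d '{
  "question": "How do I reset my password?",
  "answer":   "Go to Settings > Account > Reset Password and follow the email link."
}'

# Treat an incoming customer question as a full-text query; hits come back
# sorted by relevance score.
curl -XGET 'localhost:9200/support/response/_search' -d '{
  "query": { "match": { "question": "forgot my password, how do I reset it" } }
}'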

  • Also pluggable ranking models! This is important for trying different approaches to getting good results for customers. The modularity means we can build one pipeline and improve our responses by swapping in different ranking models.
  • Can be extended with our own custom ranking functions. For instance, we might care about
    • Information decay, where more recent response snippets rank at the top.
    • Ranking based on uses and non-uses of a response snippet.
  • Customer’s questions treated as query input, and support agent’s responses treated as snippets to look up.
  • References: see Searching below.

Searching
  • Relevance: Elasticsearch’s main advantage over a traditional database is full-text search. Search results are sorted by their relevance score. The concept of relevance is completely foreign to traditional databases, in which a record either matches or it doesn’t. See Full Text Searching.
  • Phrase Search: Sometimes we want to match exact sequences of words (phrases). Use the match_phrase query; see Phrase Search.
  • Highlighting: Although not super important, we can highlight the snippet that matched our search; see Highlighting. A phrase-search and highlighting sketch follows after this list.
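
A rough combined sketch of phrase search plus highlighting against the same hypothetical support index:

curl -XGET 'localhost:9200/support/response/_search' -d '{
  "query":     { "match_phrase": { "question": "reset my password" } },
  "highlight": { "fields": { "question": {} } }
}'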

Ranking Models

Using a good ranking model is the meat of the problem. Famous ranking models:

  • TF-IDF: see “What is TF-IDF? The 10 minute guide” and the Wikipedia article on TF-IDF.
  • BM25 is regarded as slightly better than TF-IDF for our case (see the sketch after this list).
    • Quote from Similarity in Elasticsearch: “There is a reason why TF-IDF is as widespread as it is. It is conceptually easy to understand and implement while also performing pretty well. That said, there are other, strong candidates. Typically, they offer more tuning flexibility. In this article we have delved into one of them, BM25. In general, it is known to perform just as good or even better than TF-IDF, especially on collections with short documents.”
  • Consider taking the Coursera course on NLP to learn more about ranking models.
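
Switching Elasticsearch from its default TF-IDF scoring to BM25 is a per-field mapping setting. A rough sketch when creating the hypothetical support index (again, the index and field names are ours):

curl -XPUT 'localhost:9200/support' -d '{
  "mappings": {
    "response": {
      "properties": {
        "question": { "type": "string", "similarity": "BM25" }
      }
    }
  }
}'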

The two models above are statistical approaches. In recent years, fundamental breakthroughs were achieved using machine learning, specifically neural architectures, in several subfields of AI: computer vision, speech recognition, and machine translation. Consequently, more advanced ranking models could be derived from neural network approaches.

Training Data

Evaluating any prediction or recommendation engine relies on having a good set of data. The Ubuntu Dialogue Corpus is one such dialogue dataset.

Ubuntu Dialogue Corpus

The Ubuntu Dialogue Corpus, introduced by this paper, contains almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. Along with introducing the corpus, the paper also discusses learning architectures suitable for analyzing this dataset.

Specifically, the following architectures are benchmarked for performance:

  • Term Frequency-Inverse Document Frequency (TF-IDF, which is what is used by the Elasticsearch/Lucene engine)
  • Recurrent Neural Network (RNN)
  • Long Short-Term Memory (LSTM) architecture

Performance evaluation is based on the task of best response selection, without human labels. The agent is asked to select the k most likely responses, and it is correct if the true response is among the k candidates. The family of metrics used in language tasks is called Recall@k. For example, k = 1 is denoted as R@1.
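
As a toy illustration of R@1, assume a hypothetical tab-separated file eval.tsv where column 1 is the true response id and column 2 is the model's top-ranked candidate:

# Fraction of rows where the top-ranked candidate matches the true response.
awk -F'\t' '$1 == $2 { hit++ } END { printf "R@1 = %.2f\n", hit / NR }' eval.tsv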

The observed result is that the LSTM outperforms both RNN and TF-IDF on all evaluation metrics.

Daerli Chinese Conversation Log

A confidential corpus of support dialogues is to be used in our testing, as the customers involved are Chinese companies.