Here I come with another contribution for the community.
GitLab repo (yes, you get the source): [Only registered and activated users can see links. Click Here To Register...]
This is a CSRO-R proxy/middleware and security layer built to support my most recently released files revision (Shiroi).
So, the aim of this project: first of all, I needed something like this for my own project.
Another reason was to completely get rid of that dirty, ugly MGUARD proxy that I released some time ago
(it wasn't my work, so I'm deeply sorry for sharing such a piece of junk).
Beyond that, it is simply a solid alternative to all those messy proxies you've ever had out there (unless you paid a ton to some professional).
It is the perfect kick-off point for building your own stuff from scratch.
This proxy will let you build your own backend features with ease, as I've pre-built a user-friendly, dead-simple functional API around the proxy core (see the guide below).
Simple guide on how to set up your own custom packet handler (step 1):
Code:
// To create a new packet handler you will need to follow these steps:
// - Open the folder src/controllers
// - Create a file with your own unique name, ending in .js
// - Now follow this pattern to create your own handler.
// In this case the handler is named UniqueKilled and it does exactly what it says - something when a unique gets killed.
// This is an actually working feature that rewards users with silk based on the configuration provided in src/config/AgentServer.js
async function UniqueKilled( // this function's parameters always receive a set of helpers injected by the main process that you can use further:
{
logger, // per-user logger - all characters from the same IP end up in one log file
config: {
UNIQUES, // a configuration object defined in src/config/AgentServer.js (see the sketch after this code block)
},
stream: { // silkroad-security API used to read/create packets
reader,
writer,
},
memory, // per-session key/value store used for session handling
api: {
account, // this is SRO_R_ACCOUNT api (auto generated from sequelize models)
shard, // this is SRO_R_SHARD api
}
},
packet, // the packet itself, arriving via the opcode registered in src/config/AgentServer.js
) {
try {
const current_charname = memory.get('CharName16'); // whoami (stored in the per-session memory at character selection - see the SelectCharacter.js controller)
const read = new reader(packet.data); // initialize a reader over the packet payload
if (read.uint8() == 6) { // first byte of the payload
const uniqueId = read.uint16(); // unique monster id
const arg1 = read.uint8(); // read but not used here
const arg2 = read.uint16(); // read but not used here
const killer_name = read.string(); // name of the character that got the kill
if (killer_name == current_charname) {
const unique_config = UNIQUES[uniqueId] || false;
const UserJID = memory.get('UserJID');
const CharID = memory.get('CharID');
// Retrieve actual char information from the shard database
const {
data: [
{
CurLevel,
}
]
} = await shard.get(`/_char`, {
params: {
sort: JSON.stringify(['CharID']),
filter: JSON.stringify({
CharID,
}),
}
});
// write a log line for this event into that specific user's log file
logger.log('info', 'killed_unique', {
UserJID,
killer_name,
CurLevel,
reward_config: unique_config,
unique_id: uniqueId,
});
// we may want to dispatch this event to Discord later, or do something more - that can go here
if (unique_config && CurLevel <= unique_config.cap) { // reward only if this unique is configured and the killer's level is within the cap
// Dispatch silk reward
await account.post(`/reward-silk`, { // add silks to the user
UserJID,
CharID,
amount: unique_config.reward,
});
// Send a notice to user after the unique kill:
const write = new writer(); // this is the writer class from silkroad-security, more here: https://github.com/EmirAzaiez/SilkroadSecurityJS
write.uint8(7); // write some bytes
write.string(`Great! You got ${unique_config.reward} silk for killing [${unique_config.name}]`); // write some message
return [ // you can return multiple packets to various directions
{ // packet 1:
packet,
target: 'client', // either 'remote' or 'client': remote is the actual game server, client is the player's game client
},
{ // packet 2:
packet: {
opcode: 0x3026, // custom or an existing opcode to send
data: write.toData(),
},
target: 'client',
},
];
}
}
}
return [{ packet, target: 'client' }]; // nothing matched above - just forward the original packet to the client unchanged
} catch (e) {
return [{ packet }];
}
}
export default UniqueKilled; // here we export the controller from file
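The UNIQUES configuration that this controller reads is keyed by the unique's ID and, judging from the fields used above (name, cap, reward), it could look roughly like the sketch below. This is an assumption for illustration only - the ID and values are placeholders, and the real structure lives in src/config/AgentServer.js:
Code:
// src/config/AgentServer.js - hypothetical sketch of the UNIQUES section only
export default {
  // ... other AgentServer configuration ...
  UNIQUES: {
    // keyed by the uniqueId read from the 0x300C packet
    12345: {                  // placeholder unique monster id
      name: 'Example Unique', // shown in the notice message sent to the killer
      cap: 80,                // highest character level still eligible for the reward
      reward: 10,             // silk amount passed to the /reward-silk endpoint
    },
  },
  // middlewares: { ... },    // the opcode -> controller mapping shown in step 3 below
};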
This is step 2 - register the controller in the controllers index:
Code:
// open src/controllers/index.js and import the controller from above
// keep in mind this file is required so that the controller name can be referenced in the respective
// module config file (in this example we're using AgentServer and opcode 0x300C)
import RedirectAgentServer from './RedirectAgentServer';
import SelectCharacter from './SelectCharacter';
import UpdateWeather from './UpdateWeather';
import SilkDisplay from './SilkDisplay';
import ItemMallToken from './ItemMallToken';
import ItemMallBuy from './ItemMallBuy';
import FixPartyMatching from './FixPartyMatching';
import UniqueKilled from './UniqueKilled'; //<---- this guy
export {
RedirectAgentServer,
SelectCharacter,
UpdateWeather,
SilkDisplay,
ItemMallToken,
ItemMallBuy,
FixPartyMatching,
UniqueKilled, // <--- we also export it
};
OK, so step 3 - hook the opcode up with that logic:
Code:
// src/config/AgentServer.js
// at around line 107 you should find the 'middlewares' object
middlewares: {
client: { // client -> remote opcode handlers
0x7001: 'SelectCharacter',
0x3012: 'SilkDisplay',
0x7069: 'FixPartyMatching',
// 0x7565: 'ItemMallToken',
// 0x7034: 'ItemMallBuy',
},
remote: { // remote -> client opcode handlers
0x3809: 'UpdateWeather',
0x300C: 'UniqueKilled', //<-- just add this line and we're done, hit save
},
},
// remote stands for the actual agent/gateway/download server
// client is the game client itself
You can also set up a multi-agent / remote-server redirect system (src/config/Redirects.js) to fit your server configuration without issues:
Code:
import dotenv from 'dotenv';
dotenv.config();
export default {
// AgentServer
':12': { // <-- IP AGENT1 FROM
host: process.env.REDIRECT_AGENT_IP || '138.201.58.79', // <-- IP TO
port: process.env.REDIRECT_AGENT_PORT || 8002, // <-- PORT TO
},
':13': { // <-- IP AGENT2 FROM
host: process.env.REDIRECT_AGENT2_IP || '138.201.58.79', // <-- IP TO
port: process.env.REDIRECT_AGENT2_PORT || 8002, // <-- PORT TO
},
':15': { // <-- IP AGENT3 FROM
host: process.env.REDIRECT_AGENT3_IP || '138.201.58.79', // <-- IP TO
port: process.env.REDIRECT_AGENT3_PORT || 8002, // <-- PORT TO
},
// you can keep using the above pattern to create basically an unlimited number of agent redirects - see RedirectAgentServer.js for more info
// DownloadServer -- same thing as agent but for the gateway -> download
// '148.251.195.215:16002': { // <-- IP FROM
// host: process.env.REDIRECT_DOWNLOAD_IP || '127.0.0.1', // <-- IP TO
// port: process.env.REDIRECT_DOWNLOAD_PORT || 8003, // <-- PORT TO
// },
};
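Since the redirect config above reads its values from process.env (via dotenv), you can override the defaults from a .env file at the project root. A minimal sketch - the variable names are taken from the config above, while the IP/port values are placeholders you should replace with your own:
Code:
# .env (project root) - placeholder values, adjust to your own network
REDIRECT_AGENT_IP=10.0.0.12
REDIRECT_AGENT_PORT=8002
REDIRECT_AGENT2_IP=10.0.0.13
REDIRECT_AGENT2_PORT=8002
REDIRECT_AGENT3_IP=10.0.0.15
REDIRECT_AGENT3_PORT=8002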
Key features:
- Fully written in ECMAScript 9 (modern JavaScript) - super simple and flexible syntax;
- Super fast, and everything is componentized;
- Loads configuration from .env files and is highly configurable;
- Fully working proxy for gateway / agent / download;
- Runs well on both Windows and Linux environments (also macOS - Intel/AMD only, it crashes on M1);
- Basic ip2country / country blacklist / ip2proxy detection (note that this requires downloading and extracting [Only registered and activated users can see links. Click Here To Register...] into the data directory);
- Fully supports remote server-to-server connections (e.g. the proxy sits on the other side of the world; want a cluster/cloud of them? that works too - it's NodeJS);
- More or less all of the CSRO-R client->remote packets are whitelisted;
- Fixed the party matching buffer overflow issue;
- Each client gets its own unique process, which gives the smoothest in-game connection you'll ever see (almost like there's no proxy - you simply don't feel it);
- Logs all client->remote packets (except ping) into separate log files per user IP;
- Pluggable packet controllers for each module, almost like in Express;
- Handles packets in both directions (remote/client) and can dispatch multiple packets (to various directions) from a single endpoint by simply returning a JavaScript array (see the controllers folder for examples of how to do that);
- You also get an auto-documented REST API (via Swagger) around the entire CSRO-R database, including SR_ADDON_DB. See src/models to understand how it works - you can just import the existing model files into the respective directory's index.js and they will automatically appear on your REST API endpoints as a CRUD service around that table:
Code:
// Let's say we want to expose a table.
// Open src/models/account/index.js and src/models/account/TB_User.js
// The model basically defines the columns and their types for the respective table:
export default (db, types) => db.define('TB_User', {
JID: {
type: types.INTEGER,
allowNull: false,
primaryKey: true
},
StrUserID: {
type: types.STRING(25),
allowNull: false
},
password: {
type: types.STRING(50),
allowNull: false
},
Status: {
type: types.TINYINT,
allowNull: true
},
GMrank: {
type: types.TINYINT,
allowNull: true
},
Name: {
type: types.STRING(25),
allowNull: true
},
Email: {
type: types.STRING(50),
allowNull: true
},
sex: {
type: types.CHAR(2),
allowNull: true
},
certificate_num: {
type: types.STRING(30),
allowNull: true
},
address: {
type: types.STRING(100),
allowNull: true
},
postcode: {
type: types.STRING(10),
allowNull: true
},
phone: {
type: types.STRING(20),
allowNull: true
},
mobile: {
type: types.STRING(20),
allowNull: true
},
regtime: {
type: types.DATE,
allowNull: true
},
reg_ip: {
type: types.STRING(25),
allowNull: true
},
Time_log: {
type: types.DATE,
allowNull: true
},
freetime: {
type: types.INTEGER,
allowNull: true
},
sec_primary: {
type: types.TINYINT,
allowNull: true
},
sec_content: {
type: types.TINYINT,
allowNull: true
},
AccPlayTime: {
type: types.INTEGER,
allowNull: true
},
LatestUpdateTime_ToPlayTime: {
type: types.INTEGER,
allowNull: true
},
TotalLoggedOutTime: {
type: types.BIGINT,
allowNull: true
},
LastLoggedOuttime: {
type: types.DATE,
allowNull: true
},
EmailValidate: {
type: types.TINYINT,
allowNull: true
},
NickName: {
type: types.STRING(50),
allowNull: true
}
}, {
sequelize: db,
tableName: 'TB_User',
schema: 'dbo',
hasTrigger: true,
timestamps: false
});
// Now in src/models/account/index.js - see how the two files are linked together:
import _Notice from './_Notice';
import TB_User from './TB_User'; // <-- the sequelize model we want
import SK_Silk from './SK_Silk';
export {
_Notice,
TB_User, // <-- just export it and it will appear on the API_Account service as a route {HOSTNAME}:{PORT}/tb_user supporting POST, GET, DELETE and PUT methods
SK_Silk,
}
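To give a feel for how these generated CRUD routes are consumed, here is a minimal standalone sketch using plain axios. The filter/sort query parameters mirror the ones the UniqueKilled controller above sends to the shard API, so presumably the account service accepts the same style; the base URL and the JID value are placeholder assumptions - check your own API_Account config (src/config/API.js) for the real host and port:
Code:
// hypothetical usage sketch - not part of the repository
import axios from 'axios';

// placeholder base URL: point this at your API_Account service (see src/config/API.js)
const account = axios.create({ baseURL: 'http://127.0.0.1:3001' });

async function example() {
  // GET {HOSTNAME}:{PORT}/tb_user filtered by JID, in the same style the controllers use
  const { data } = await account.get('/tb_user', {
    params: {
      sort: JSON.stringify(['JID']),
      filter: JSON.stringify({ JID: 1 }), // placeholder JID
    },
  });
  console.log(data); // the matching TB_User rows
}

example().catch(console.error);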
Keep in mind: none of the API_* services should be exposed to the public - they are internal HTTP services built for the proxy servers to communicate with the database, and their ports must be closed in the firewall.
To view the auto-generated Swagger docs, visit IP:PORT/api_docs of each API_*-prefixed service (see the port info in the console or in the config).
The most fun part - launching the servers:
- Check your configuration at src/config/AgentServer.js, src/config/GatewayServer.js, src/config/DownloadServer.js, src/config/API.js and src/config/Redirects.js
- Make sure all your ports / IPs are right everywhere
- Check the README.md file for even more information and guidance on running this
- Production mode: refer to the batch files at the project root (you need them all - these are "kind of" microservices that each run on their own instance); closing a bat and starting it again is basically instant - no wait time, no preloading
- Open your ports for the GatewayServer, AgentServer and DownloadServer (the DownloadServer is not mandatory - you can use the default one directly if you wish)
- Development mode: see package.json for all the available scripts, or follow this pattern: yarn dev:GatewayServer
- The project is fully open source and AGPL licensed, completely free, and hosted in a GitLab repository - [Only registered and activated users can see links. Click Here To Register...]
- Because this is JavaScript you will rarely run into hard-to-debug errors; JavaScript was originally a web-based language, but it has evolved to a level where it can outperform most of the proxies out there
Some known issue(s):
- Since this is still in development, the item mall (buy item) packet is not fully functional (it works perfectly on my own production server with MGUARD behind it; without it you will lack some packets, like the title manager - not sure which else)
- Personally, I'm using the MGUARD database at the moment just to access its records, but this is planned to be removed and replaced with the proxy's own database.
- You need to run yarn install:db to set up the default proxy tables - this will drop existing data if the tables already exist.
NB: it is really worth looking through the src/lib/* files to find out how all of this actually works - it is not rocket science.
- In development mode the proxy auto-restarts for you, to save you time.
- You can contribute updates via GitLab merge requests - keep your code clean and close to the standard I've set in the initial source, and it will be reviewed and considered for merging into the main branch.
HAVE FUN