Other Meta SFXD Stuff
- Why Discord ?
- THREE YEARS OLD
- [COVID] - SFXD Community Initiatives
- The state of SFXD, Year 5
- April Fools '23 - OpenAI-powered bad answers Chatbot
Why Discord ?
I can give some insight here.
In short I chose Discord at the time because it was a more "community-focused" platform and easier to set up for web-based communities.
In particular, what led me to Discord was:
- The invite links, easily shareable. Slack was still invite-only, or needed custom dev to build an invite page. Discord just allowed me to post links wherever, and people could join without even creating an account, which is great for one-off questions.
- The Roles, for easily tagging people. The lack of roles in Slack really turns me off. In SFXD we have "normal" roles where you're grouped together in that role for brownie points, but we also have a rather sizable amount of "taggable" roles which allow you to ask a question and then tag the people that requested these roles so they can help. It's a nice way of limiting who gets pinged, as you need to request it.
- The concept of chat channels which are all "open" unless you don't have access to them. It just makes channels more active by default. In Slack you need to explore the channels, join whatever. In Discord all channels you have permission to see are visible, which encourages more discussion.
I won't lie, the addition of the game store a year or so back hurt us as far as IT blacklists go. I am still hoping Discord releases something like a Discord Pro with no game store that we could use.
But in the meantime as a community-focused hub Discord was just better, even if it's less prevalent - and despite its downsides I still think it was the right choice for us.
As far as federating Slack & Discord, we'd need some sort of webhook that leads one into the other. I mean, why not, but that seems very dev-heavy for not a lot of gain.
THREE YEARS OLD
Dear SFXD Members that are @here,
it is our greatest pride, our glory, hell the apotheosis of glee, to inform you all:
SFXD IS NOW THREE YEARS OLD.
For those who are new joiners, SFXD's first post was one of my now-ex colleagues responding "OK" in chat because she didn't know the platform - she then promptly lost her Discord account and never rejoined (imagine missing out on the Walter memes).
Up until something like January 2017 we were... Maybe 100 accounts, most of which were temporary joiners (the source of our automatic pruning of inactive accounts), with maybe 10 recurring posters.
We now have:
- a website https://sfxd.github.io/
- a wiki https://wiki.sfxd.org/
- a Linkedin page where people actually put themselves as Community Member in their job description, much to our pride: https://www.linkedin.com/company/sfxd/
- the most upvoted Reddit posts of /r/Salesforce with the Abridged Release Notes
- ~1750 members that survive pruning on a regular basis - meaning people that are either active, or have requested a role and stayed.
All meme'ing aside we are really, really proud that SFXD has thrived so much - and we owe it to all of you.
So to celebrate, we are changing nothing.
No changes to our commitment to this community.
No changes to our no-ads policy on websites.
No changes to our no-money and no-donations policies.
We won't change anything because SFXD is all of you, and we want to keep growing with you all.
SFXD, still a community-first Salesforce Discussion Group. Come for the Salesforce, stay for the memes ;)
With love and a million "thank you"s,
@Moderators @Specialty Mod - Dev @Specialty Mod - General
[COVID] - SFXD Community Initiatives
Hey @everyone,
what with the coronavirus situation, there's a lot of people working remotely that aren't used to it, which can be a bit isolating.
We'd like to do our part in helping the Salesforce community through this, and we thought we'd try to be even more awesome than usual.
(If you're not a member of SFXD and are reading this somewhere else than Discord you can join us via https://join.sfxd.org :) )
Current plan:
- We've opened the #quarantine-coffee-room, a voice channel to come and chill. Us mods will try to be present at least an hour or so a day in it. You can come in and chat about your day, or ask about Salesforce, or anything really - consider it your coffee break.
- Once a week, we will host SFXD office hours in the #office-hours voice channel. When office hours are in session, one or more Mods will be present in that channel, and the channel will be moved to the `General` category for visibility. Office hours are a time where we'll be available exclusively to answer Salesforce questions - the format is somewhat free at the moment, but please jump in if you have questions. We will announce office hours 24h ahead of time so people can prepare an agenda. Please post your questions in #office-hours in the `Meta` category so we can find them easily, e.g. "What can I do to optimize queries in large data volume environments ?"
- I will try to host some consulting content once a week in #quarantine-coffee-room, starting next week. Things that I'm currently working on:
  - a workshop case study, with a real client that'll join the channel - perfect for people who want to see how those are done;
  - an SFXD-ready version of my company-famous Salesforce Consulting 101 presentation;
  - a "what now" bootcamp for post-corona strategies for clients and end-users.
- If anyone wants to host Admin, Developer or Consultant sessions or other such content using the Discord server as a host, please approach us and we'll make it happen.
Stay safe <3
Your SFXD Mod Team
The state of SFXD, Year 5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| SFXD and the Mod Team PRESENT |
| ~~ |
| The state of SFXD, Year 5 |
| |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
INTRODUCTION
Hey everyone,
in addition to the yearly survey, which gives insights into what you think about SFXD, we thought we’d do a mod post about what we think about SFXD this year. I’ll try to keep these running yearly.
The idea behind this is to:
- be transparent about mod actions
- showcase statistics that we have about the Server (we don’t have many seeing as we hate tracking)
- give you a bit of a behind-the-scenes view of how we run SFXD.
We’ll try to keep things short, but this will definitely take up a few screens worth of messages (BEAR WITH ME) :bear:.
So without further ado:
SERVER STATS
Actual Server Stats
4582 Members, of which:
- 1530 role-less Members Inactive for 7 Days
- 2256 Members Inactive for 7 Days
- 1036 role-less Members Inactive for 30 Days
- 1555 Members Inactive for 30 Days
Which results in a “real” membership of roughly 2600 members active every month (roughly 1/3rd of monthly inactives check the server once every few months).
With:
- 785 average weekly visitors
- 236 Weekly communicators / 395 Monthly communicators
- 95 messages per communicator
- 110 new members weekly on average, of which some 15 stay active in the server
- Roughly 33% growth every year for the past 4 years.
- 12500 messages sent weekly on average
- 2500 messages sent daily on average
- 60 minutes of voice monthly on average
21 servers are following the #announcements channels and receive what we write here.
Meta Server Stats
- 92 roles
- 50 text channels (including archived and mod)
- 21 people have been assigned a Salesforce Internal role
- 1 Vendor still active (hi @gearset)
- 12 live sessions and office hours done last year (more to come!)
Wiki Stats
The Wiki runs on a glorious 1vCPU, 1GB of RAM droplet in DigitalOcean. Its entire footprint is roughly 20MB, and most articles are a few kilobytes - the storage comes from linked documents.
The wiki sees some 2000 visits a month on average, with people staying 14 minutes on a single page.
The wiki serves on average 300MB of data per month, mostly to the US.
Implementation Guides stats
The implementation guides repo runs on a glorious 1vCPU, 1GB of RAM droplet in DigitalOcean. Its entire footprint is some 450MB due to the implementation guide PDFs.
The implementation guides repo sees some 9500 visits a month on average, with people staying 6 minutes on a single page.
The implementation guides repo serves on average 1TB of data per month, 950GB of which is cached, mostly to the US with India being close behind.
We don’t really have other stats because all of them come from either cloudflare or a selfhosted non-invasive stats module.
Total server costs are roughly $12/month, due to some droplet resizes I gotta do when the ARN get released - otherwise the traffic buckles the server.
MODERATION
Hard Numbers
- SFXD Mods have banned 63 members all-time, of which 57 were spam accounts during the spam-wave this year, which were then deleted by Discord.
- SFXD Mods have kicked 3 members this year, all of which had 2+ warnings.
- SFXD Mods have emitted 12 manual warns this year, most of which were for spamming content or approaching members despite being asked not to.
- SFXD Mods have been involved in 2 individual chats with specific members for de-escalation of various situations.
- Enforcer has emitted 22 automatic warns this year, most of which were about compliance with #job-offers or #job-requests, the rest being
@everyone
tags.
Note that reporting on mod actions is somewhat more complex than previous years due to Tatsu being yeeted into the sun, and replaced by our beloved @Enforcer.
General Mod stance
This hasn’t changed in forever - we moderate based on the rules in #basic-info, and if those do not match with our intent, we modify those publicly and post in #announcements regarding this.
The community is generally self-moderating. You are all great people, and it makes it easy to keep a handle on the server.
As a direct result we tend to be relatively hands-off re: moderation, apart from enforcing the heavily-moderated channels and banned topics.
The only actions that have not been taken in direct accordance with #basic-info were the de-escalation talks. While going into detail about whom and why is not going to happen, either here or in public channels, the short version of both cases was a member was exhibiting behavior we found unfit for SFXD. In both cases, we approached the member about why we felt this way, and how we would like to see change. One such member left SFXD; the other stayed.
Specific Mod actions
The most “aggressive” mod is myself, with some 90% of all manual enforcement actions being done by me.
The most “talkative” mod is Supergrape for last year, closely followed by Mekel (a few hundred messages’ worth, based on Enforcer points).
Tsalb continues to be the master of the Development sections, with all mod actions in those channels being done by him.
Mike oversees the ISV channels, with no mod actions taken to date.
That’s all :slight_smile:
You can look forward to the Survey results soon as well :slight_smile:
April Fools '23 - OpenAI-powered bad answers Chatbot
Introduction
April Fools' pranks are an iconic part of the Internet, ranging from fake articles to site-breaking jokes.
Two years ago we did a pretty huge prank where we rebranded the entire Server to Microsoft Dynamics and had the bot answer random stuff in chat whenever Salesforce terms were said. It was chaos, and roughly 300 people left the server at the time out of pure confusion, thinking Microsoft had put them in a new Discord because they'd acquired it. We considered the operation a huge success. :)
Last year, we wanted to do an NFT joke but the setup was too annoying so we didn't do it.
This year... I decided to use the current buzzword, AI, to get us to something nice. This post is about what I did, how, and what it shows about the tech.
Prerequisites
As a basis, we had Holly.
Holly's a honeypot. She is a Discord User that I created for the specific purpose of catching Users that break rules regarding harassment, DMs, and other such behavior.
She's caught roughly 20 people since being put online, which isn't high but still too much.
She runs on very outdated tech because of her nature - Discord doesn't really allow Selfbots anymore, so I have to use deprecated versions of `discord.js`, which also means deprecated versions of `Node.js`.
As such, she has no incoming ports and no way of connecting to her server apart from physically being there, because, well, the entire thing's a giant security risk.
On the other side, we have OpenAI. They have a nice API. It's well documented and easy enough to handle.
And finally, we have the current buzz around ChatGPT.
So the concept we came up with was:
- have Holly listen to messages, seeing if any message had a question
- if it does, throw it at OpenAI using a custom prompt
- return the answer.
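The three steps above hinge on one small decision - which messages warrant an answer. A minimal sketch of that predicate (the field names here are illustrative, not Holly's actual code):

```javascript
// Decide whether a message should be forwarded to OpenAI:
// it must come from a human and contain a question mark.
function shouldAnswer(message) {
  return !message.authorIsBot && message.content.includes('?');
}

// A human asking a question qualifies; a bot, or a statement, does not.
shouldAnswer({ authorIsBot: false, content: 'Who founded Salesforce?' }); // true
```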
OpenAI, Chatbots, and Jokes
Normally, asking OpenAI stuff results in it giving proper answers (or at least it tries) in a professional way.
To use it in a manner suitable for April Fools, we needed it to answer with jokes, bad information, and ideally in a manner that didn't sound professional, to ensure people wouldn't really trust it.
The first step was then to create a Prompt that would be sent to the bot, which "colors" the answer given, and to add the "real" question at the end. The idea is simple: you instruct the model in what you expect to see, and then let it do whatever.
We chose to add a super-long prompt, mostly to feed it more parameters it could use to add randomness.
The Prompt
This would be unsuitable for any production use as it results in any exchange costing roughly $0.01 per response. But it was for a joke, and a timed one at that - so I put a hard cutoff at $30 and called it a day.
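The hard cutoff itself is trivial to enforce. A sketch of the budget guard, using the figures from this post (~$0.01 per response, $30 cap); the function names are mine, not the bot's:

```javascript
// Budget guard for the joke bot - tracked in integer cents to avoid
// floating-point drift. ~$0.01 per response, hard cutoff at $30.
const COST_CENTS_PER_RESPONSE = 1;
const BUDGET_LIMIT_CENTS = 3000;

let spentCents = 0;

// True while we can still afford one more answer.
function withinBudget() {
  return spentCents + COST_CENTS_PER_RESPONSE <= BUDGET_LIMIT_CENTS;
}

// Call this after every successful completion.
function recordResponse() {
  spentCents += COST_CENTS_PER_RESPONSE;
}
```

At a cent a response, the $30 cap works out to roughly 3,000 answers before the bot goes quiet.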
Integrating OpenAI using OpenAI
OpenAI is good at writing code, they said. So I thought hey let's just use it to generate code!
Now obviously I could have written a very specific prompt, defining every piece of the code, but if I was going to do that I could have written it myself.
So I decided to tell it to add stuff to Holly's internal code.
The code prompt
Let's speak about integrating Discord and ChatGPT using NodeJs.
here is a nodejs script.
require('dotenv').config({ path: process.argv[2] });
const { Client, MessageCollector } = require('discord.js');
const bot = new Client({ disableEveryone: true });
let active_collectors = [];
let guild;
bot.on('ready', async () => {
console.log(`Logged in as: ${bot.user.tag}!`);
guild = await bot.guilds.get(process.env.GUILD);
});
bot.on('message', async (message) => {
/*Do Stuff to ban Users that break Rules*/
});
bot.login(process.env.TOKEN);
Add an integration that listens to any message in teh guild that has a question mark, and asks ChatGPT 3.5 turbo to answer based on a prompt to define.
It answered a piece of code, the four first lines of which are the interesting ones:
require('dotenv').config({ path: process.argv[2] });
const { Client, MessageCollector } = require('discord.js');
const bot = new Client({ disableEveryone: true });
const { GPT } = require('ai-dungeon-sdk');
... now obviously:
- I didn't want to use an entire library just to do one callout
- I didn't want to use AI Dungeon's SDK because it's optimized for creative writing.
There's other problems in the code, but that in itself was an issue, so I decided to tell it to rewrite it:
depending on AI dungeon seems like a hassle. can you write a piece of code that's standalone and doesn't make me include another library ?
... and here we start getting into where OpenAI is good and where it sucks.
It answered the following
First real answer
require('dotenv').config({ path: process.argv[2] });
const { Client, MessageCollector } = require('discord.js');
const bot = new Client({ disableEveryone: true });
const https = require('https');
function generateChatGPTResponse(prompt) {
return new Promise((resolve, reject) => {
const options = {
hostname: 'api.openai.com',
path: '/v1/engine/text-davinci-002/completions',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.GPT_API_KEY}`,
},
};
const postData = JSON.stringify({
'prompt': prompt,
'temperature': 0.5,
'max_tokens': 100,
'stop': '\n',
});
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
const response = JSON.parse(data);
resolve(response.choices[0].text);
});
});
req.on('error', (error) => {
reject(error);
});
req.write(postData);
req.end();
});
}
let active_collectors = [];
let guild;
bot.on('ready', async () => {
console.log(`Logged in as: ${bot.user.tag}!`);
guild = await bot.guilds.get(process.env.GUILD);
});
bot.on('message', async (message) => {
if (message.author.bot) return;
const filter = (m) => m.author.id === message.author.id;
if (message.content.includes('?')) {
const prompt = message.content.replace(/\?/g, '');
const response = await generateChatGPTResponse(prompt);
message.reply(response);
} else {
/*Do Stuff to ban Users that break Rules*/
}
});
bot.login(process.env.TOKEN);
This was also problematic in (amongst others) that:
- by default, it used an outdated and VERY DARN EXPENSIVE model (text-davinci-002)
- it randomly added a STOP parameter which prevents newlines
- the code would error on run (I'll come back to that later)
- but worst: the URL for the API is completely and utterly wrong, as in the URL doesn't even exist.
Telling it to change the URL yielded more code that would never work, leading to a change in approach on my part.
OP: alright, let's start over then because it seems we've been led astray.
ChatGPT: Sure, what would you like to achieve with the integration between Discord and ChatGPT using Node.js?
And I started over using the following:
New Prompt
Here is my code.
>> same code as before, removed for visibility
I want you to extend this code. What I want is that any time it reads a message in the guild, in a specific channel of id "246568944213819393", which contains a question mark, it answers using the gpt-3.5-turbo Chat Model. This is using the Chat endpoint.
It should have a temperature of 1, a max length of 500, Top P of 0.8, Frequency Penalty of 1, Presence Penalty of 0.
It should use the API key sk-
It should use the following example as a prompt:
"Yo-ho and a bottle of rhum"
This led to a better response with its own issues. You'll find the code below.
In short the code:
- Ignored the original code I passed
- Wouldn't run because async/await weren't used properly
- Passed the Prompt as a String, where it's better to send it as an array according to the documentation
- used node-fetch, which I hadn't included - I had to pull an (also outdated, due to my codebase) version from NPM (not its fault, I didn't write that in the prompt)
- told me to add stuff which wasn't needed in currently working parts of the code
GPT Code v2
require('dotenv').config({ path: process.argv[2] });
const { Client } = require('discord.js');
const fetch = require('node-fetch');
const bot = new Client({ disableEveryone: true });
const API_KEY = process.env.API_KEY;
const GUILD_ID = process.env.GUILD_ID;
const CHANNEL_ID = '246568944213819393';
const PROMPT = `"Yo-Ho and a bottle of rhum"`;
const generateResponse = async (message) => {
const question = message.content.trim();
const endpoint = 'https://api.openai.com/v1/chat';
const data = {
prompt: PROMPT + '\n\nQ: ' + question + '\nA:',
temperature: 1,
max_tokens: 500,
top_p: 0.8,
frequency_penalty: 1,
presence_penalty: 0,
};
const response = await fetch(endpoint, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${API_KEY}`,
},
body: JSON.stringify(data),
});
const { choices } = await response.json();
const answer = choices[0].text.trim();
message.channel.send(answer);
};
bot.on('ready', () => {
console.log(`Logged in as: ${bot.user.tag}!`);
});
bot.on('message', async (message) => {
if (message.author.bot || message.channel.type !== 'text') {
return;
}
if (message.channel.id === CHANNEL_ID && message.content.includes('?')) {
generateResponse(message);
}
});
bot.login(process.env.DISCORD_TOKEN);
It also added that the `guild` variable must be assigned inside the `bot.on('ready', async () => {` handler, like this:
bot.on('ready', async () => {
console.log(`Logged in as: ${bot.user.tag}!`);
guild = await bot.guilds.fetch(process.env.GUILD);
});
I could continue step by step but in short I had to:
- tell it to store the prompt in arrays
- tell it to use the variables it was creating within the functions that needed said variables
- tell it to only use await within the async context
- tell it to remove parameters it added randomly one time, which aren't used in the Chat endpoint at all
- and a few others.
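The prompt-as-array fix boils down to building the `messages` array the Chat endpoint expects, with the user's question appended last. A minimal sketch (the helper name is mine, not from the bot's code):

```javascript
// Build a Chat Completions messages array: system messages set the
// persona, and the user's actual question goes last.
function buildMessages(systemPrompts, question) {
  const messages = systemPrompts.map((content) => ({ role: 'system', content }));
  messages.push({ role: 'user', content: question });
  return messages;
}
```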
OpenAI also continuously generated incorrect feedback, such as:
OpenAI wrong feedback
- `message` is not defined outside of the `bot.on('message', async (message) => {` function. Therefore, `question = message.content.trim()` outside of this function will not work.
- The `generateResponse` function is not defined within the scope of the `bot.on('message', async (message) => {` function. Therefore, it cannot be called from within this function.
- The `prompt` variable contains a set of messages that the user is supposed to reply to, but the bot does not collect the user's responses to these messages. Instead, the bot responds to any message that includes a question mark in the specified channel.
- The `guild` variable is undefined when `bot.on('message', async (message) => {` is first called. Therefore, `guild.channels.get()` will not work. `guild` should be assigned after the bot has successfully logged in, using `bot.on('ready', async () => {` instead.
None of which are true.
In the end though, the code I generated with OpenAI worked... except I never got an answer, instead getting a runtime error. Turns out I trusted the bot to know how the response would be returned, except that
return data.choices[0].text.trim();
is UTTERLY WRONG. As per the documentation, it should be:
return data.choices[0].message.content;
But in the end the code functioned.
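The gotcha is the response shape: the older Completions endpoint returns `choices[0].text`, while the Chat endpoint nests the answer under a `message` object. A minimal extractor against an abbreviated sample response:

```javascript
// Chat Completions nests the answer under message.content,
// not under text as the older Completions endpoint did.
function extractAnswer(data) {
  return data.choices[0].message.content;
}

// Abbreviated shape of a Chat Completions response body.
const sample = {
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Yarr, wrong answer ahoy!' },
      finish_reason: 'stop',
    },
  ],
};
```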
The end result
require('dotenv').config({ path: process.argv[2] });
const { Client, MessageCollector } = require('discord.js');
const fetch = require('node-fetch');
const https = require('https');
const bot = new Client({ disableEveryone: true });
const API_KEY = process.env.OPENAPI_KEY;
const CHANNEL_ID = process.env.QUESTION_CHANNEL;
const model = 'gpt-3.5-turbo';
const temperature = 0.7;
const maxTokens = 100;
async function generateResponse(messages, model, temperature, maxTokens) {
const agent = new https.Agent({ rejectUnauthorized: false });
const endpoint = 'https://api.openai.com/v1/chat/completions';
const headers = {
'Content-Type': 'application/json',
Authorization: `Bearer ${API_KEY}`,
};
const requestBody = {
messages: messages,
model: model,
temperature: temperature,
top_p: 0.8,
max_tokens: 500,
presence_penalty: 0,
frequency_penalty: 1,
user:"holly"
};
const response = await fetch(endpoint, {
method: 'POST',
headers: headers,
body: JSON.stringify(requestBody),
agent: agent
});
const data = await response.json();
console.log(data);
return data.choices[0].message.content;
};
let active_collectors = [];
let guild;
bot.on('ready', async () => {
console.log(`Logged in as: ${bot.user.tag}!`);
guild = await bot.guilds.get(process.env.GUILD);
});
bot.on('message', async (message) => {
if (message.author.bot) return;
const filter = (m) => m.author.id === message.author.id;
if (/*Moderation Stuff*/) {
/*Moderation Stuff*/
}
if (message.channel.id === CHANNEL_ID && message.content.includes('?')) {
const question = message.content.trim();
const prompt = {
messages: [
{ role: 'system', content: 'Let\'s speak about Salesforce. I want you to give me wrong answers whenever I ask a question. The answer should be evidently wrong, and the formulation should be very troll-y. Make sure to insert expletives as needed to ensure it is evident your answers are a joke.' },
{ role: 'system', content: 'The vernacular used should be that which would be used by a teenager.' },
{ role: 'system', content: 'Insert jokes in your answers, making sure that it is visible that it is a joke.' },
{ role: 'system', content: 'Make references to Geek culture, including but not limited to "Star Wars, the Salesforce Awakens", pokemon, digimon, game of thrones, LOTR, Friends, Big Bang Theory.' },
{ role: 'user', content: 'Who founded Salesforce?' },
{ role: 'system', content: 'It was actually a group of Jedi knights who decided to use the Force to create a CRM platform. They called it "Salesforce Awakens."' },
{ role: 'user', content: 'How many employees does Salesforce have?' },
{ role: 'system', content: 'Well, after the recent layoffs, I think theyre down to like three or four people. Its basically just Ross, Rachel, Chandler and Joey from Friends running the whole thing now.' },
{ role: 'user', content: 'What are some popular Salesforce products?' },
{ role: 'system', content: 'Well, there\'s Sales' },
{ role: 'user', content: question },
]
};
const response = await generateResponse(prompt.messages, model, temperature, maxTokens);
message.channel.send(response);
}
});
bot.login(process.env.TOKEN);
The Outcome
Testing the bot went perfectly. So we just had to wait for April Fools to launch it live.
Thoughts on OpenAI Code Generation
One could read this article and think that this shows OpenAI's limits.
The truth though is that OpenAI mostly wrote code that integrated Discord and itself.
In total, writing the bot took 6 hours of messing around. You have to remember that I am not a developer, so a good dev could have gone faster, but without OpenAI I would have taken something like an extra 6 hours.
The reality is that:
- I could have fed OpenAI its own documentation as a basis
- I could have fed OpenAI the documentation of discord.js as a basis
- I could have given it a prompt telling it exactly what I wanted step by step
- the Codex (the code generation part of OpenAI) normally requires you to give pseudocode with comments to tell it what to populate.
And despite having low information and badly structured requests (by design), OpenAI wrote code that worked after tweaks.
If anything, my opinion is that it will be much more interesting medium term to learn "how to prompt" and "how to pseudocode" so things like ChatGPT can function correctly, so that a lot of boilerplate is removed.
Plus we got our April Fools joke, so yay on that. :)
The Actual Jokes Day
Due to the Server being active mostly during Weekdays, we fired up the bot a day earlier than actual April Fools.
In total, we serviced roughly 2,500 messages, for a total cost of $0.65. Yes, that's 65 cents.
Below are my personal favourite generated messages. Please remember that all of this was autogenerated by OpenAI using the custom prompt above, and is not representative of anyone's real opinion.