Category Archives: Development

Davinci – Text Editor Color Theme Translation

Davinci is the result of my first foray into Open Source Software. Whenever I've attempted to change my default text editor over the years, the one thing that consistently prevented me from doing so was not being able to get used to the slightly different color schemes in the new editor.

I set out to fix this problem using Sublime, my editor of choice, and Atom, which seems like the up-and-comer in the editor world, and I created Davinci – a Ruby command-line gem that parses an input theme file or directory and spits out a new theme for a different editor. You can find it here: https://github.com/mattcheah/davinci.

Material theme comparison
How does this witchcraft happen?

Simply put, a parser (XML or regex) reads a theme file and searches for specific settings, which it slots into an options hash keyed by specific pieces of code (comments, strings, function names, function arguments, integers, constants, language-specific variables, etc.). A template for the output text editor is then scanned, and option-specific placeholder strings are replaced with the values parsed from the original document.
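As a rough sketch of the idea – note that the settings format and `{{placeholder}}` syntax below are simplified inventions for illustration, not Davinci's actual formats:

```ruby
# Hypothetical sketch: read "name: color" settings into an options hash,
# then substitute {{placeholders}} in an output-editor template.
def parse_settings(text)
  options = {}
  # Each line pairs a code element name with a hex color.
  text.scan(/^(\w+)\s*:\s*(#[0-9A-Fa-f]{6})/) do |name, color|
    options[name] = color
  end
  options
end

def fill_template(template, options)
  # Replace each {{name}} with its parsed color, defaulting to black.
  template.gsub(/\{\{(\w+)\}\}/) { options.fetch($1, '#000000') }
end

theme = "comment: #75715E\nstring: #E6DB74"
opts = parse_settings(theme)
puts fill_template("<string>{{comment}}</string>", opts)
# => <string>#75715E</string>
```

The real gem has to map between editor-specific scope names as well, but the parse-into-hash, fill-template flow is the core of it.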

For future releases, I intend to add additional I/O controllers for other text editors like Brackets or Dreamweaver (or Vim, for the crazies).

A Theme Translating Tool That Works Perfectly Every Time?

Revolutionary! Mind-Blowing! Inconceivable!

No. Someone lied to you. Each text editor has its own method of adding specific styles to its code. Atom is built on the Google Chrome platform, so its underlying architecture is similar to a web browser's. This also means that all content on the page is styled using CSS, by applying different HTML classes to pieces of code.

Sublime, on the other hand, is styled by an XML document that specifies different schemas for code elements, with each schema nesting more levels that apply to more specific code elements.
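To make the contrast concrete: in Atom, a theme is essentially a stylesheet, so a comment color boils down to a CSS rule like `.syntax--comment { color: #75715E; }`. In Sublime, the same color lives in an entry of the .tmTheme plist, roughly like this (color value invented for illustration):

```xml
<dict>
  <key>name</key><string>Comment</string>
  <key>scope</key><string>comment</string>
  <key>settings</key>
  <dict>
    <key>foreground</key><string>#75715E</string>
  </dict>
</dict>
```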

As such, there is no one-to-one mapping for applying colors to specific code, meaning that oftentimes there's some guesswork involved in assigning a color from a schema over to a class, or vice versa.

This is further complicated by the fact that not all theme developers will write their themes using the same schemas or classes, and therefore sometimes Davinci will use the color assigned to a general schema when it was intended for a more specific element.

As an example of the difficulties encountered: Atom considers a variable passed to a Ruby block to be a syntax--variable, but Sublime does not consider this to be related to a variable at all, meaning that this color must be changed manually if one wants the new theme to be a perfect copy.


This was an interesting discovery for me, because it led to the realization that, for this project at least, there are some things I could fix, but they would require a lot of time and energy and would probably result in a lot of code bloat. Sometimes it's easier to do the work by hand, unfortunately.

An Open Source Adventure!

This project is about more than just helping myself move on from Sublime, although I'm sure many of us could use a hand with that. It's also about learning how to work in OSS – as a developer, I feel I've been rather compartmentalized in my own world, working on my own projects, and I know that opening myself up to feedback and collaboration would probably do wonders for my ability to think about developing software as part of a team and not as a one-man band.

As such, I’m looking for feedback regarding several issues:

  • How efficient my code is:
    • Is it intuitive? Is it DRY?
    • Is it easy to understand and follow along?
    • Does it violate anything that might be considered ‘The Ruby Way’?
  • How well does it solicit collaboration? Have I included anything that is off-putting to potential collaborators?
  • How can I improve the concept?

I’ve already learned quite a lot from simply taking ownership and solving an issue I wanted to solve, but I’m hoping that collaboration on Davinci in the future will lead me to even more improvements in the way I write code and in the way I interact with other developers.

Davinci can be found on RubyGems under the name davinci-text.

Alexa Skill Certification: Complete


Amazingly, I have done it.
This was a fairly difficult undertaking that took me through multiple development learning phases, including:

  • Understanding how Alexa interprets intents and slots, and how it routes information through those intents
  • Figuring out how Amazon employs account linking and uses OAuth
  • Using Node for the first time and working out kinks in HTTP requests
  • Being forced to figure out how promises work to make sure that all API requests go through
  • Dealing with several quirks in the new Alexa Skill Builder (beta)

I’ll go through some of these in detail and cover how I dealt with the issues as they came up.

Intents and Slots:

One of the ridiculous things I said prior to building this app was that I wouldn't need any custom slot types, which came from not really understanding how slots worked or how I could pass information to the app with custom slots. Obviously, my 'do list items' had to be passed in as a custom slot type. Here are a few more things I learned regarding intents and slots:

Any intent can be triggered from any interaction

I set up my skill separated into different states, so depending on what state a user was in, they might get different responses when triggering the same intent. I used this to funnel users down a path with the least likelihood of triggering an intent that they did not want. For example, when users say "Alexa, ask calendar to-do to add something to my list," they enter a state within the app for which there should be only two options: say an item to add to the list, or cancel (or help, or stop, etc.). However, I learned that these states don't necessarily prevent a user from triggering an intent that is not listed in that state. From the adding-an-item state, the user can say something, and if Alexa interprets it to mean that they want to remove an item from their do list, they're out of luck. The only way to get around that is to run the same code regardless of which of the two intents was triggered.
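A sketch of that workaround (the state name, intent names, and router shape here are my own invented simplification, not the skill's actual handler table): register the same handler under both intent names so a misfired intent inside the "adding an item" state still runs the right code.

```javascript
function makeRouter() {
  const handlers = {};

  function addItemHandler(slots) {
    return 'Added ' + slots.item + ' to your list';
  }

  // Same code runs whether Alexa routes to "add" or "complete"
  // while the user is in the adding state:
  handlers['ADD_STATE:AddToDoList'] = addItemHandler;
  handlers['ADD_STATE:CompleteFromDoList'] = addItemHandler;

  return function route(state, intent, slots) {
    const handler = handlers[state + ':' + intent];
    return handler ? handler(slots) : "Sorry, I didn't get that.";
  };
}

const route = makeRouter();
console.log(route('ADD_STATE', 'CompleteFromDoList', { item: 'laundry' }));
// prints "Added laundry to your list"
```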

Interpreting any string in a slot can be a lot of work

People can put an unlimited variety of things in their to do lists, and my paltry list of 20-something sample utterances simply did not recognize when I said something abnormal like 'brush my chinchilla with salsa.' Interpreting literal strings, as I discovered, was much easier when the sample utterances had a larger base to draw from – cue me coming up with some of the most outrageous to do list tasks ever until I had around 100 (e.g. "broker a peace agreement in the middle east", "solve America's opioid epidemic", "reform the criminal justice system", "run America like a company", and many other items that might be on some random guy's list – we'll call him, say, Jared). That seemed to solve the problem. Thanks, Jared!
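For illustration, a custom slot type with a broad set of sample values might look roughly like this in the Skill Builder's interaction model (the format is abbreviated, and `TODO_ITEM` is a placeholder name I've invented here):

```json
{
  "languageModel": {
    "intents": [
      {
        "name": "AddToDoList",
        "slots": [{ "name": "item", "type": "TODO_ITEM" }],
        "samples": ["put {item} on my calendar do list today"]
      }
    ],
    "types": [
      {
        "name": "TODO_ITEM",
        "values": [
          { "name": { "value": "brush my chinchilla with salsa" } },
          { "name": { "value": "broker a peace agreement in the middle east" } },
          { "name": { "value": "reform the criminal justice system" } }
        ]
      }
    ]
  }
}
```

The more varied the sample values, the better Alexa seemed to handle arbitrary strings in the slot.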

Account Linking:

Account linking is literally the easiest thing when you’re connecting to a Google API. You don’t need to know anything about OAuth, and you don’t need to do anything extraordinary. Unfortunately, I didn’t know that it was easy, and it ended up being one of the most frustrating things throughout the development process.

The backbone of the skill is making API requests to Google Calendar to read and write your to do list, so the user has to link their account within the Alexa app. The easiest way to do it (if you're using a Google API, or presumably any other API that uses OAuth) is to set up your Google API in the console and then copy all of the required information into the Alexa account linking section in the configuration tab (client secret, auth URL, client ID, token URI, etc.). Note: I read that you have to add subdomains to the domain list if you want to pull information from a subdomain. I have it set up this way, but I am not sure if it is required. (i.e. I have both google.com and accounts.google.com in my domains list to make sure that account linking works when getting an access token from https://accounts.google.com/o/oauth2/token.)

The biggest problem I ran into was the access token not getting refreshed; I had to re-link the skill every hour if I still wanted to use it – obviously not an acceptable state of affairs. After scouring the web far and wide, I found a forum post saying that I had to add `?access_type=offline` to my authorization URL, so that it looked like this: https://accounts.google.com/o/oauth2/auth?access_type=offline. This solved my issue immediately.

NodeJS:

For my first time using Node, I was fairly happy with how simple it seemed. There's something about writing in normal JS that's so calming and doesn't make you want to pull your hair out and smash things. All of the modules are well documented, and there seems to be a good community of developers with plenty of answers to difficult questions. I'm looking forward to using Node in the future for sure.

Actually making the HTTP requests was a little more frustrating, since the entire concept was a bit over my head, but again, the documentation in the request module and the ease of trying new things was enough to get me started. The real problem came when I started sending responses to Alexa before I had received a response from the server, which led me to…

Promises!

Having never had to make a ton of sequential API requests in the past, I had only known about the concept of promises without really understanding how to use them. I had to really dig in and figure out how to chain my multiple API requests together and get those responses before moving on to the next step. Suffice it to say, I'm extremely happy I was forced to learn this, as I've been in multiple situations where I descended down the callback spiral and found myself wondering why there wasn't a better way. Now I know!
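The chaining pattern looks something like this – `fetchEvent` and `updateEvent` are stand-ins I've invented for the real Google Calendar calls, but the shape is the point: each `.then` waits for the previous response before the final answer is built.

```javascript
// Stand-in for reading the calendar event that holds the do list.
function fetchEvent(id) {
  return Promise.resolve({ id: id, description: 'laundry' });
}

// Stand-in for writing the updated list back to the event.
function updateEvent(event, item) {
  const updated = Object.assign({}, event, {
    description: event.description + '\n' + item,
  });
  return Promise.resolve(updated);
}

function addItem(eventId, item) {
  return fetchEvent(eventId)                      // 1. read the current list
    .then(event => updateEvent(event, item))      // 2. write the new item
    .then(() => 'Okay, I added ' + item + '.')    // 3. only now build the response
    .catch(() => 'Sorry, something went wrong.');
}

addItem('abc123', 'walk the dog').then(msg => console.log(msg));
// prints "Okay, I added walk the dog."
```

Compare that to nesting each request inside the previous one's callback – the chain reads top to bottom, and a single `.catch` covers every step.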

Some frustrations:

Documentation

I may be completely wrong about this, but my overall feeling during development was that there were a lot of sample projects to work from that would help you understand how to develop a skill, but no documentation devoted specifically to development – what the Alexa object is, how to get a slot value, how to set up different states, etc. All of this information could be found by poking around different examples, but as with everything in the development world, people tend to do things differently, and if your project isn't set up the same way as someone else's, that can be frustrating. There was plenty of documentation discussing the concepts of intents, slots, account linking, responses, etc., but not very much discussing how to put those concepts into practice.

Testing

This is more of an issue with AWS Lambda, but do I really have to upload my entire project every time I make a change to my code? Is there no way to tweak something quickly in a text editor, or to upload one file instead of the multiple files in my zipped package? Making minute changes while testing the service was one of the most time-consuming, frustrating processes, because it would take five minutes to test just one thing. I'm sure there is a better way (e.g. hosting the code myself), but I don't know how to do that, so I guess I'm stuck with the Lambda environment.

Support

This is no one's fault in particular, but I noticed that the developer community for this particular product is much smaller than the community for a language or framework (as is to be expected), so it takes much longer to find out whether someone else is having the same problems as you, or how to solve said problems. Several of my questions on the Amazon developer forum went unanswered, and though I appreciate that Amazon has staff working to respond to people's questions, it's still a pain to wait so long for an answer before you can continue with your project. As I move into more projects like this, though, I suppose I should get used to it.

I’m done!

If you want to take a look at my skill and use it, you can find it here:
The GitHub page for the skill can be found at https://github.com/mattcheah/alexa-calendar-to-do, if you want to make any changes or add any functionality for yourself.

I have a few planned improvements for when I have time – namely, Amazon is now allowing developers access to the existing to-do lists that a user has in their apps, so I'd like to add the option of downstream and upstream syncs: when you use the calendar do list skill, it will pull all of the information from the built-in Alexa do list and add it to the list on your calendar.

I’d love to hear feedback or any other thoughts! Thanks for reading guys.

Starting a new Alexa skill from scratch

That’s right! I’m going to attempt the impossible.

Since I don't have a ton of fun smart lights or windows or doors laying around my house, I figured I'd utilize Alexa for task planning and time management. My wife and I usually keep a "to do" list in our shared Google calendar. This to-do list moves every day – if not to the current day, then to the day when we will have time to do the things on the list. Sometimes there are busy days when we can't get to anything on the list. Sometimes there are free days, and my wife will move the list to one of those.

Alexa already has functionality to tell me what's on my calendar for the day I'm asking about. Alexa doesn't, however, have functionality to read the description of that event, parse it, and give me a list of everything on my to-do list (or add to it and subtract from it). I'd like to leverage the Google Calendar API and an AWS Lambda function to create a skill that reads from and writes to the calendar and gives me the option to hear the list of things I need to do that day.
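The parsing step could be as simple as treating the event description as one item per line. A minimal sketch, assuming a plain-text description with optional bullet markers (the function name and format are my own invention at this planning stage):

```javascript
// Split a calendar event's description into individual to-do items,
// stripping any leading "-" or "*" bullets and blank lines.
function parseDoList(description) {
  return description
    .split('\n')
    .map(line => line.replace(/^[-*]\s*/, '').trim())
    .filter(line => line.length > 0);
}

const items = parseDoList('- take out the trash\n- water the plants\n');
// items → ['take out the trash', 'water the plants']
```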

So getting started, I need to write out/create my:

  • Intent Schema: Ideas for this include “GetAllDailyItems”, “PeekDoList”, “AllDoList”, “AddToDoList”, “CompleteFromDoList”, “Help”
  • Custom Slot Types: I don’t anticipate needing to create any custom slot types but we will have to see how I can interpret data from the Google calendar API and pass it to the lambda function.
  • Built-In Slots: After a brief review of the built-in slots, I might need "day of the week" and "month".
  • Sample Utterances:
    • “PeekDoList Alexa, What do I have to do today?”
    • “PeekDoList Alexa, What’s on my to-do list?”
    • “AllDoList Alexa, Read all of my items on my to do list.”
    • “GetAllDailyItems Alexa, What is on my calendar today?”
    • “AddToDoList Alexa, put {item} on my calendar do list today.”
    • “CompleteFromDoList Alexa, mark {item} on my calendar do list as complete.”
    • “Help Alexa, help.”
  • A Visual Representation of The Menu/Model: I’ll take a stab at this after setting up the intents and samples properly.
  • Companion App Cards: Not sure if these will be needed, since the visual representation will be right in the Google calendar.
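As a starting point, the intent schema for the ideas above might look roughly like this (the `item` slot and its type are placeholders I'd expect to revise once I see how slot data actually flows):

```json
{
  "intents": [
    { "intent": "PeekDoList" },
    { "intent": "AllDoList" },
    { "intent": "GetAllDailyItems" },
    { "intent": "AddToDoList", "slots": [{ "name": "item", "type": "ITEM" }] },
    { "intent": "CompleteFromDoList", "slots": [{ "name": "item", "type": "ITEM" }] },
    { "intent": "AMAZON.HelpIntent" }
  ]
}
```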

I’m sure this will be messy but we’ll try to get it done.

Playing with Alexa

Today I started working on a new project: writing a simple trivia game to be played using Amazon's Alexa service. By "writing," of course, I mean copying someone else's work as a template and following a tutorial to get the results I wanted, even though I never actually knew what I was doing.

From my understanding of this, though, there are two parts to publishing a 'Skill' that can be used by an Amazon Echo/Dot/Tap/etc.
The first is a Lambda function from Amazon Web Services, which contains all the code for your skill. The second is your Amazon developer console, where you create Alexa skills and set up instructions for how your code interacts with Alexa; the Lambda function is connected to the console, and from the console you can test your created skill and submit it for publishing.

The entire scope of AWS seems massive – Lambda is just one of around 50 services used for computation, development, data management, security, etc. That alone was extremely intimidating, but there are also so many services Amazon can connect to and so many different ways to use Alexa that it feels like it would be very difficult to get the hang of everything. I'm assuming that most people focus on only one kind of thing, though.

For now, I'd be interested to learn more about how Alexa interacts with outside services. I'm very interested in home automation, and I'd love to integrate my own skills to use inside my house. I guess we'll see how this goes over the next few Alexa projects.