Business Function
Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.
Responsibilities
- Design and implement key components for a highly scalable, distributed data collection and analysis system built to handle petabytes of data in the cloud
- Move architecture and implementation through the development pipeline, from research to deployment
- Work with architects from other divisions who contribute to this analytics system, and mentor team members on best practices in backend infrastructure and distributed computing
- Analyze source data and data flows, working with structured and unstructured data
- Manipulate high-volume, high-dimensionality data from varying sources to highlight patterns, anomalies, relationships and trends
- Analyze and visualize diverse sources of data, interpret results in the business context and report results clearly and concisely
- Apply data mining, NLP, and machine learning (both supervised and unsupervised) to improve relevance and personalization algorithms
- Work side-by-side with product managers, software engineers, and designers in designing experiments and minimum viable products
- Build and optimize classifiers using machine learning techniques and enhance data collection procedures relevant to building analytic systems (see the pipeline sketch after this list)
- Discover data sources, get access to them, import them, clean them up, and make them model-ready. You need to be willing and able to do your own ETL
- Create and refine features from the underlying data. You'll enjoy developing just enough subject matter expertise to have an intuition about what features might make your model perform better, and then you'll lather, rinse and repeat
- Run regular A/B tests, gather data, perform statistical analysis, draw conclusions on the impact of your optimizations and communicate results to peers and leaders
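To make the classifier-building and feature-engineering duties above concrete, here is a minimal sketch of a Spark MLlib training pipeline in Scala. The dataset path, column names, and the choice of logistic regression are illustrative assumptions, not part of the role's actual stack.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

object ChurnPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("churn-pipeline").getOrCreate()

    // Hypothetical cleaned, model-ready table produced by an upstream ETL step
    val df = spark.read.parquet("/data/curated/customer_features.parquet")

    // Encode a categorical feature and assemble columns into a feature vector
    val indexer = new StringIndexer()
      .setInputCol("segment").setOutputCol("segmentIdx")
    val assembler = new VectorAssembler()
      .setInputCols(Array("tenureMonths", "txnCount30d", "segmentIdx"))
      .setOutputCol("features")
    val lr = new LogisticRegression()
      .setLabelCol("churned").setFeaturesCol("features")

    val pipeline = new Pipeline().setStages(Array(indexer, assembler, lr))

    val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
    val model = pipeline.fit(train)

    // Area under ROC as a quick sanity check before any A/B rollout
    val auc = new BinaryClassificationEvaluator()
      .setLabelCol("churned")
      .evaluate(model.transform(test))
    println(s"Test AUC = $auc")

    spark.stop()
  }
}
```

The Pipeline abstraction keeps the feature transforms and the model in one unit, so the same fitted stages apply identically at training and scoring time.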
Requirements
- 10+ years of experience in one or more areas of big data and machine learning
- The ability to work with loosely defined requirements and exercise your analytical skills to clarify questions, share your approach and build/test elegant solutions in weekly sprint/release cycles
- Development experience in Java/Scala and pride in producing clean, maintainable code
- Practical experience in clustering high dimensionality data using a variety of approaches
- Real-world experience in solving business problems by deploying one or more machine learning techniques
- Experience creating pipelines that analyze data, extract features and update models in production
- Independence and self-reliance while being a pro-active team player with excellent communication skills
- Hands-on development with key technologies including Scala, Spark, and other relevant distributed computing languages, frameworks, and libraries
- Experience with distributed databases, such as Cassandra, and the key issues affecting their performance and reliability
- Experience using high-throughput, distributed message queueing systems such as Kafka (see the streaming sketch after this list)
- Familiarity with operational technologies, including Docker (required), Chef, Puppet, ZooKeeper, Terraform, and Ansible (preferred)
- An ability to periodically deploy systems to on-prem environments
- Mastery of key development tools such as Git, and familiarity with collaboration tools such as Jira and Confluence
- Experience with Teradata SQL, Exadata SQL, T-SQL
- Strong experience in graph and stream processing
- Experience in migrating SQL from traditional RDBMS to Spark and big data technologies (see the SQL rewrite sketch after this list)
- Experience building language parsers (e.g., with ANTLR), query optimizers, and automatic code generation
- In-depth knowledge of database internals and Spark SQL Catalyst engine
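As a concrete illustration of the Kafka and Spark requirements above, here is a minimal Spark Structured Streaming sketch in Scala that consumes a Kafka topic and lands raw events as Parquet. Broker addresses, the topic name, and the paths are hypothetical, and the job assumes the spark-sql-kafka-0-10 package is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object EventIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("event-ingest").getOrCreate()
    import spark.implicits._

    // Hypothetical topic and brokers; Kafka delivers key/value as binary
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "txn-events")
      .load()
      .select($"key".cast("string"), $"value".cast("string"), $"timestamp")

    // Land raw events as Parquet; a downstream job could instead write to
    // Cassandra via the DataStax spark-cassandra-connector
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/raw/txn-events")
      .option("checkpointLocation", "/chk/txn-events")
      .start()

    query.awaitTermination()
  }
}
```

The checkpoint location lets the stream resume from its last committed Kafka offsets after a restart, which is what gives the file sink its exactly-once output guarantee.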
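For the SQL-migration requirement, the sketch below shows the kind of rewrite typically involved when porting a Teradata-style query to Spark SQL; the table and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object SqlMigration {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("sql-migration").getOrCreate()

    // Register a hypothetical curated table for SQL access
    spark.read.parquet("/data/curated/accounts.parquet")
      .createOrReplaceTempView("accounts")

    // A Teradata QUALIFY ... ROW_NUMBER() query has no direct equivalent in
    // open-source Spark SQL; the usual rewrite is a windowed subquery plus
    // a filter on the row number
    val latestPerCustomer = spark.sql("""
      SELECT customer_id, balance, as_of_date
      FROM (
        SELECT customer_id, balance, as_of_date,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY as_of_date DESC) AS rn
        FROM accounts
      ) t
      WHERE rn = 1
    """)

    latestPerCustomer.show()
    // latestPerCustomer.explain(true) surfaces the Catalyst logical and
    // physical plans, useful when verifying a migrated query
    spark.stop()
  }
}
```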
Apply Now
We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognizes your achievements.
Primary Location
Singapore-DBS Asia Central
Job
Risk
Schedule
Regular
Job Type
Full-time
Job Posting
Oct 9, 2024, 3:37:33 AM