
3773 Scala Jobs - Page 46

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing our customers' ability to experience the world.

Our Purpose – Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know Our Team
The Data department, based in Bangkok, oversees all of Agoda's data-related requirements. Our ultimate goal is to enable and increase the use of data in the company through creative approaches and the implementation of powerful resources such as operational and analytical databases, queue systems, BI tools, and data science technology. We hire the brightest minds from around the world to take on this challenge and equip them with the knowledge and tools that contribute to their personal growth and success while supporting our company's culture of diversity and experimentation. The role the Data team plays at Agoda is critical, as business users, product managers, engineers, and many others rely on us to empower their decision making. We are equally dedicated to our customers, improving their search experience with faster results and protecting them from fraudulent activities. Data is interesting only when you have enough of it, and we have plenty. That drives up the challenge of being part of the Data department, but also the reward.

The Opportunity
Please note: this role will be based in Bangkok. We are looking for ambitious and agile data scientists who would like to seize the opportunity to work on some of the most challenging production machine learning and big data platforms worldwide, processing some 600B events every day and making some 5B predictions. As part of the Data Science and Machine Learning (AI/ML) team you will be exposed to real-world challenges such as: dynamic pricing, predicting customer intent in real time, ranking search results to maximize lifetime value, classifying content and extracting personalization signals with deep learning from unstructured data such as images and text, making personalized recommendations, innovating algorithm-supported promotions and products for supply partners, discovering insights from big data, and innovating the user experience. To tackle these challenges, you will have the opportunity to work on one of the world's largest ML infrastructures, employing dozens of GPUs working in parallel, 30K+ CPU cores and 150TB of memory.
In This Role, You'll Get to
- Design, code, experiment and implement models and algorithms to maximize customer experience, supply-side value, business outcomes, and infrastructure readiness
- Mine big data covering hundreds of millions of customers and more than 600M daily user-generated events, plus supplier and pricing data, and discover actionable insights to drive improvements and innovation
- Work with developers and a variety of business owners to deliver daily results with the best quality
- Research, discover and harness new ideas that can make a difference

What You'll Need To Succeed
- 4+ years of hands-on data science experience
- Excellent understanding of AI/ML/DL and statistics, as well as coding proficiency with related open-source libraries and frameworks
- Significant proficiency in SQL and languages like Python, PySpark and/or Scala
- Ability to lead and work independently, as well as play a key role in a team
- Good communication and interpersonal skills for working in a multicultural environment

It's Great if You Have
- A PhD or MSc in Computer Science, Operations Research, Statistics or another quantitative field
- Experience in NLP, image processing and/or recommendation systems
- Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop
- Experience in data science for e-commerce and/or OTA

We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance are available for eligible candidates.

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy.

Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency's representation of a candidate, from unsolicited third-party or agency submissions.
If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
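As a concrete illustration of the Spark-on-Scala proficiency this listing asks for, here is a minimal, self-contained sketch of scoring and ranking items from event data. It is a hypothetical example only: the column names (userId, propertyId, bookingValue) and the naive value-based ranking are illustrative assumptions, not Agoda's actual schema or algorithm.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: aggregate booking events and rank properties by a
// naive total-value proxy. All names and data are hypothetical.
object RankingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ranking-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val events = Seq(
      ("u1", "p1", 120.0), ("u1", "p2", 80.0),
      ("u2", "p1", 200.0), ("u3", "p2", 60.0)
    ).toDF("userId", "propertyId", "bookingValue")

    // Score each property by total booking value and booking count,
    // then order the result for downstream ranking.
    val ranked = events
      .groupBy($"propertyId")
      .agg(sum($"bookingValue").as("totalValue"),
           count(lit(1)).as("bookings"))
      .orderBy(desc("totalValue"))

    ranked.show()
    spark.stop()
  }
}
```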

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your Primary Responsibilities Include
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Developed Python and PySpark programs for data analysis
- Good working experience using Python to develop custom frameworks for generating rules (like a rules engine)
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations

Preferred Technical And Professional Experience
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
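The last required-expertise bullet describes a common Spark pattern: read from Hive, transform with the DataFrame API, write back to Hive. A minimal Scala sketch of that pattern follows; the database, table, and column names are hypothetical placeholders, not the client's actual schema.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: read a Hive table, apply a business transformation
// with the DataFrame API, and write the result back to Hive.
object HiveTransformSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-transform-sketch")
      .enableHiveSupport() // uses the cluster's Hive metastore
      .getOrCreate()

    val orders = spark.table("raw_db.orders")

    // Example business transformation: keep completed orders and
    // compute a per-customer total.
    val totals = orders
      .filter(col("status") === "COMPLETED")
      .groupBy(col("customer_id"))
      .agg(sum(col("amount")).as("total_amount"))

    totals.write.mode("overwrite").saveAsTable("curated_db.customer_totals")
    spark.stop()
  }
}
```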

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About this position:
We are eagerly seeking candidates with 5 to 10 years' experience for a Lua Kong API Gateway Developer role on our dynamic team. The ideal candidate is a skilled professional with exposure to cloud platforms, DevOps, and data visualization, preferably with financial domain exposure, who will play a pivotal role within the team. You will collaborate with internal teams to design, develop, deploy, and maintain software applications at scale.

Role: Lua Kong API Gateway
Location: Xoriant Location
Experience: 5 to 10 years
Job type: Full time
Work type: Hybrid

Impact you will realize:

Job Responsibilities
- Coordinate with Architecture and Network Engineering to understand and develop platform architecture
- Develop a Dockerized api-proxy solution built on the Kong API Gateway, written with and using Lua best practices
- Work with CloudFormation templates to extend and refine our AWS infrastructure, including but not limited to auto-scaling with Docker containers running on EC2
- Develop and manage the entire AWS stack and all its components (RDS, ElastiCache, etc.)
- Understand and define performance-level needs for the platform
- Design, implement, execute, and report performance testing results
- Define CloudWatch logs and alarms, troubleshoot them, and fix issues in a defined release cadence
- Integrate with third-party products that integrate with CloudWatch
- Manage IAM permissions and work with DevOps to maintain "least privilege"
- Coordinate with other teams to provide API contract guidance and implement routing for their microservices
- Develop and refine Jenkins CI/CD pipelines to deploy code, run acceptance tests, and monitor environment health
- Effectively collaborate with a cross-geo team (dev teams working out of Pune, India and Salt Lake City, USA) and be willing to stretch at times
- Effectively collaborate with TS/TAM/NOC to address queries and concerns

Key skills you will require:

Primary Skills
- Experience with DevOps tools and processes: Jenkins, Git, Docker
- Scripting: Unix shell, Groovy, Python
- Experience in one or more of the following software languages: Kong/Lua (scripting languages Python, Scala)
- Experience designing, developing, deploying and supporting RESTful APIs
- Experience developing services, clients and multi-threaded software
- Experience developing with SQL Server or equivalent
- Working knowledge of unit testing and test automation
- Working knowledge of user stories and use cases
- Working knowledge of object-oriented software design and design patterns
- Comfortable working in a fast-paced environment

Secondary Skills
- nginx experience (or experience with any reverse proxy) is good to have
- Microservice architecture know-how
- Familiarity with Swagger
- Familiarity with authentication methods
- Experience as a technical or team lead, or equivalent experience
- Experience with telecommunications/telephony

Qualification you must require:
Bachelor's or master's degree in Computer Science or a related field

Why should you join Xoriant?
Xoriant is a trusted provider of digital engineering services, renowned for building and operating complex platforms and products at scale. With three decades of software engineering excellence, we combine modern technology expertise in Data & AI (GenAI), cloud & security, domain and process consulting to solve complex technology challenges. We serve over 100 Fortune 500 companies and tech startups on their journey to becoming unicorns and beyond.

As a "right-sized" company, we bring agility through our 5000+ passionate XFactors (our employees) from over 20 countries, fostering a culture focused on purpose and employee happiness. Want to experience life at Xoriant? In our inclusive workspace, we turn imagination into reality — every day!
- Business for Purpose: Be part of a passionate team and create a better future through tech & innovation.
- Giving Back to Community: Build a stronger business and community by volunteering and making a positive impact.
- Rise to Sustain: Support your career growth in a way that helps ensure long-term success.
- Continuous Learning: Stay curious and keep learning with us to drive innovation.
- Wellness First: Prioritize well-being with multiple health benefits and experience work-life balance.
- Rewards & Recognition: Value your work with meaningful rewards and recognition.
- One Xoriant Family: Celebrate the joy of diversity, inclusivity and togetherness through festivals.
- Candid Connects: Connect directly with leaders and voice your opinion.
- Culture of Ideation: Be a trailblazer, bring new ideas to the fore and realize them through engineering.

If there's an XFactor in you, we have a chair dedicated to your name. To know more about Xoriant, please visit: www.xoriant.com

Important Notice: We have been alerted that some job candidates, who posted their resumes on specific websites and portals, have been approached by imposters posing as Xoriant and making deceptive offers using Xoriant branding. Only Xoriant communications from our website, official email addresses, and verified social media accounts should be considered legitimate. Xoriant will never ask for payment during the recruitment process, nor have we authorized any external agencies to collect a fee on our behalf. Avoid sharing your personal details until you verify the offer's legitimacy. Cross-check the credentials of anyone claiming to represent Xoriant with our official HR department. If you receive any suspicious job offers or fraudulent communication bearing Xoriant branding, contact us at careers@xoriant.com immediately.

Equal Employment Opportunity Statement: We are committed to providing equal employment opportunities to all individuals, regardless of race, color, religion, gender, national origin, age, disability, or veteran status. Our inclusive workplace values diversity and ensures that all employees are treated fairly and with respect, promoting a culture of belonging. We strive to create a supportive environment where everyone has the opportunity to succeed and contribute to our collective success.

Posted 1 week ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site


Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role And Responsibilities
Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In This Role, Your Responsibilities May Include
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk to client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Total experience 6-7 years (relevant 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob
- Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java

Preferred Technical And Professional Experience
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
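The extract-transform-load pipeline described in the mandatory skills can be sketched concisely in Scala in a Databricks style: read raw CSV from Azure Blob storage, standardize it, and write a curated Delta table. This is a hedged illustration only; the storage paths and column names are placeholders, and Delta support is assumed to be available as it is on Databricks clusters.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal ETL sketch: blob-storage CSV in, curated Delta table out.
// Paths and columns are hypothetical placeholders.
object BlobToDeltaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("blob-to-delta").getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("wasbs://raw@<account>.blob.core.windows.net/sales/")

    // Light standardization: trim names, parse dates, drop bad rows.
    val cleaned = raw
      .withColumn("customer", trim(col("customer")))
      .withColumn("sale_date", to_date(col("sale_date"), "yyyy-MM-dd"))
      .filter(col("sale_date").isNotNull)

    cleaned.write.format("delta").mode("append").save("/mnt/curated/sales")
    spark.stop()
  }
}
```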

Posted 1 week ago

Apply

10.0 - 17.0 years

15 - 20 Lacs

Pune

Hybrid


Job Title: Scala Technical/Application Architect
Employment Type: Full-time
Experience Level: 10+ Years

About Cybage:
Cybage Software is a technology consulting organization headquartered in Pune; you will get the opportunity to be part of a highly skilled talent pool of more than 7,000 employees. We also have operations hubs in GNR and Hyderabad, and a presence in the USA, UK, Japan, Germany, Ireland, Canada, Australia, and Singapore. We provide seamless services and dependable deliveries to clients from diverse industry verticals such as Media and Advertising, Travel and Hospitality, Digital Retail, Healthcare and Life Sciences, Supply Chain and Logistics, and Technology.

About the Role:
We are looking for an experienced Scala Architect to design and lead the development of scalable, high-performance systems. You'll bring deep expertise in Scala (2 & 3), Akka Streams, and domain-driven design to architect systems capable of handling high transaction volumes.

Required Skills and Qualifications:
- 10+ years of overall software development experience, with 4+ years in Scala development, including both Scala 2.x and 3.x
- Proven experience architecting and delivering highly scalable, transactional platforms
- Expertise in Akka Streams, Akka HTTP, and related reactive programming libraries
- Strong grasp of domain-driven design (DDD) and functional programming principles
- Deep understanding of streaming architectures, back-pressure handling, and event-driven systems
- Demonstrated experience leading technical design efforts for mission-critical applications
- Proficient in integrating with modern CI/CD, testing, and deployment pipelines
- Familiar with cloud-native architectures (e.g., AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes)

Responsibilities:
- Architect scalable, distributed systems using Scala and Akka
- Design domain models aligned with business needs
- Ensure performance for high-volume, transactional workloads
- Lead and mentor teams in Scala, FP, and Akka Streams
- Own technical design, documentation, and delivery
- Collaborate across teams for end-to-end solution success

Shift timings: General shift
Location: Pune - Kalyani Nagar

It's good to have:
- 60% and above in any two of the following, and 55% and above in the third: Secondary, Higher Secondary (or its equivalent) and Graduation level (aggregate)
- Strong communication and interpersonal skills
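The back-pressure handling this role calls out is a core Akka Streams property: demand propagates upstream, so a fast producer is automatically slowed to the pace of its slowest stage. Here is a minimal, self-contained Scala sketch of that behavior; the names are illustrative, not from Cybage's codebase.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

// Minimal back-pressure demo: a fast source is throttled by a slow
// downstream stage; demand propagates upstream automatically, so the
// producer never overruns the consumer.
object BackpressureSketch extends App {
  implicit val system: ActorSystem = ActorSystem("backpressure-sketch")

  Source(1 to 1000)                       // fast producer
    .map { n => println(s"produced $n"); n }
    .throttle(10, 1.second)               // simulate a slow stage
    .runWith(Sink.foreach(n => println(s"consumed $n")))
}
```

Running it shows "produced" lines appearing only as fast as the throttled consumer demands them, which is exactly the property an event-driven, high-volume transactional system relies on.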

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurugram, Haryana, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
- 3 years of experience in building Machine Learning or Data Science solutions.
- Experience in Python, Scala, R, or related languages, with data structures, algorithms, and software design.
- Ability to travel up to 30% of the time as needed.

Preferred qualifications:
- Experience with recommendation engines, data pipelines, or distributed machine learning, along with data analytics, data visualization techniques and software, and deep learning frameworks.
- Experience in software development, professional services, solution engineering, or technical consulting, architecting and rolling out new technology and solution initiatives.
- Experience with Data Science techniques.
- Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, Extract, Transform, and Load/Extract, Load and Transform (ETL/ELT), and reporting tools and environments.
- Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks.
- Excellent communication skills.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners.

In this role, you will play a part in ensuring that customers have the best experience moving to the Google Cloud machine learning (ML) suite of products. You will design and implement machine learning solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with machine learning, and will travel to customer sites to deploy solutions and deliver workshops designed to educate and empower customers to realize the potential of Google Cloud. You will have access to Google's technology to monitor application performance, debug and troubleshoot product code, and address customer and partner needs. You will lead the execution of adapting Google Cloud Platform solutions to the customer's requirements.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Deliver big data and machine learning solutions and solve technical customer challenges.
- Act as a trusted technical advisor to Google's customers.
- Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform.
- Deliver recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of business and technical stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are Brainlabs, the High-Performance media agency, on a mission to become the world's biggest and best independent media agency. We plan and buy media that delivers profitable and sustained business impact. And what's our formula? Superteams of Brainlabbers, fueled by data and enabled by technology.

Brainlabs has always been a culture-first company. In fact, from the very beginnings of the agency a set of shared principles, philosophies and values was documented in The Brainlabs Handbook, helping us create our unique culture. As with everything here, we always seek to adapt and improve, so The Brainlabs Handbook has been fine-tuned to become The Brainlabs Culture Code. This Culture Code consists of 12 codes that speak to what it means to be a Brainlabber. It's a joint commitment to continuous development and creating a company that we can all be proud of, where Brainlabbers can turn up to do great work, make great friends and win together. You can read The Brainlabs Culture Code in full here.

Classification: Full time
Team: Data Science Practice
Reporting to: Head of Data Services
Location: Bengaluru (Hybrid model of working)
Experience Range: 13-15 years of experience

As a Director, Data Science you'll execute data science initiatives, influence senior leadership and shape the data science roadmap for the organization. The ideal candidate will have extensive experience in statistical modeling, machine learning, Generative AI, MLOps, and cloud-based solutions, along with a track record of leading large-scale projects, mentoring high-performing teams, and deploying models into production.

What you do:
- Strategy & Roadmap: Build and define Brainlabs' global Data Science capability roadmap, focused on advanced media optimisation and marketing effectiveness
- Product Development: Work through all stages of product development: defining capability, crafting business solutions, gathering/structuring data, building ML/DL/AI models, and communicating business impact
- Innovation: Research and bring innovations to develop next-generation solutions in marketing & media analytics, incorporating advancements like GPT, LLMs, Agentic AI and multimodal data pipelines
- Team Leadership: Grow and mentor a high-performing data science team, driving thought leadership, R&D culture, and global collaboration
- Pre-sales Support: Estimate project efforts, create detailed proposals, and develop Statements of Work (SOWs); respond to Requests for Proposals (RFPs) with innovative, tailored analytics solutions
- Client/Stakeholder Communication: Engage with diverse stakeholders, including clients, team members, and management
- Organizational Initiatives: Lead key organizational initiatives focused on enhancing operational efficiency, service quality, and employee engagement
- Analysis: Analyze data and prescribe model-based recommendations/actions for end users
- Process Management: Manage projects in an Agile environment

Who You Are
- Experienced: A visionary leader with 13-15 years in Data Science projects, including 4+ years in Marketing Analytics
- Educated: Bachelor's/Master's degree in a relevant field (Math, Statistics, Analytics, Operational Research or related disciplines)
- Technically Skilled: Experience building ML/DL/NLP/AI models (Python, R, Scala), with frameworks like TensorFlow, PyTorch and YOLO, as well as marketing-specific models like Meridian, Robyn and PyMC
- LLM & AI Specialist: Deep understanding of Agentic AI frameworks (AutoGPT, BabyAGI, OpenAgents) and LLMs (GPT, Claude, Gemini, LLaMA)
- Cloud & MLOps Fluent: Familiar with MLOps, ML safety and building scalable architectures on AWS, GCP, Azure, or the Nvidia stack

How You Succeed
- Culture: You will live our culture code every day!
- Client Success: Client satisfaction targets being met (internal & external)
- Deliver Business Impact: Translate complex analytical challenges into actionable solutions that drive measurable outcomes and revenue growth
- Lead & Influence: Effectively lead interdisciplinary teams, provide thought leadership, and collaborate with various stakeholders
- Lead with Innovation: Stay ahead of trends in Agentic AI, AI safety, regulatory frameworks etc., to maintain a competitive edge
- Quality: Ensure quality across the deliverables
- Process Adherence: Enforce compliance with standard operating procedures, best practices, and organizational guidelines
- Business Growth: Identify opportunities to expand existing accounts and develop new business by understanding client needs, proposing value-driven solutions, and fostering long-term relationships

Brainlabs delivers high performance for our clients and ourselves. We do that through our high-performance culture. That means each and every one of us has very clear goals every year which are the focus for our own contribution, growth and coaching.

Practice Efficiency
- Working effectively to manage your own time in accordance with client requirements

Client
- Client satisfaction
- One client case study produced to show the impact of your work on growing client business

Practice Growth
- Learning certification/hours met
- Actively contributing ideas and innovations to the wider practice

Technology & QA
- Adoption target of 100% met
- 95% QA metric hit
- Process automation

Live our Culture Code
- Give 5 meaningful pieces of feedback each month
- Receive 360 positive feedback on how you live our Culture Code

Grow Knowledge
- Complete Brainlabs Data Science Advanced Certification
- Brainlabs Internal Mentor accreditation (optional)

What happens next?
We know searching for a job is tough and that you want to find the best career and employer for you. We also want to ensure that this position is the best fit for both you and us. Therefore, you will participate in a comprehensive interview process that includes skills interviews with our team. The goal of this process is to allow you to get to know us as we learn more about you.

Brainlabs actively seeks and encourages applications from candidates with diverse backgrounds and identities. We are proud to be an equal opportunity workplace: we are committed to equal opportunity for all applicants and employees regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion, or belief, and marriage and civil partnerships. If you have a disability or special need that requires accommodation during the application process, please let us know!

Please note that we will never ask you to transfer cash or make any other payment to us in order to apply for a role or to work for Brainlabs. Any such asks are fraudulent and should be reported to the appropriate authorities in your area.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Role Overview:
We are looking for a highly skilled and experienced Senior ETL & Data Streaming Engineer with over 10 years of experience to play a pivotal role in designing, developing, and maintaining our robust data pipelines. The ideal candidate will have deep expertise in both batch ETL processes and real-time data streaming technologies, coupled with extensive hands-on experience with AWS data services. A proven track record of working with Data Lake architectures and traditional Data Warehousing environments is essential.

Key Responsibilities:
- Design, develop, and implement highly scalable, fault-tolerant, and performant ETL processes using industry-leading ETL tools to extract, transform, and load data from various source systems into our Data Lake and Data Warehouse.
- Architect and build batch and real-time data streaming solutions using technologies like Talend, Informatica, Apache Kafka or AWS Kinesis to support immediate data ingestion and processing requirements.
- Utilize and optimize a wide array of AWS data services, including but not limited to AWS S3, AWS Glue, AWS Redshift, AWS Lake Formation, and AWS EMR, to build and manage data pipelines.
- Collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into efficient data pipeline solutions.
- Ensure data quality, integrity, and security across all data pipelines and storage solutions.
- Monitor, troubleshoot, and optimize existing data pipelines for performance, cost-efficiency, and reliability.
- Develop and maintain comprehensive documentation for all ETL and streaming processes, data flows, and architectural designs.
- Implement data governance policies and best practices within the Data Lake and Data Warehouse environments.
- Mentor junior engineers and contribute to fostering a culture of technical excellence and continuous improvement.
- Stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming.

Required Qualifications:
- 10+ years of progressive experience in data engineering, with a strong focus on ETL, ELT and data pipeline development.
- Deep expertise in ETL tools: extensive hands-on experience with commercial or open-source ETL tools (Talend).
- Strong proficiency in data streaming technologies: proven experience with real-time data ingestion and processing using platforms such as AWS Glue, Apache Kafka, AWS Kinesis, or similar.
- Extensive AWS data services experience: proficiency with AWS S3 for data storage and management; hands-on experience with AWS Glue for ETL orchestration and data cataloging; strong knowledge of AWS Redshift for data warehousing and analytics; familiarity with AWS Lake Formation for building secure data lakes; experience with AWS EMR for big data processing is good to have.
- Data Warehouse (DWH) knowledge: strong background in traditional data warehousing concepts, dimensional modeling (Star Schema, Snowflake Schema), and DWH design principles.
- Programming languages: proficient in SQL and at least one scripting language (e.g., Python, Scala) for data manipulation and automation.
- Database skills: strong understanding of relational and NoSQL databases.
- Version control: experience with version control systems (e.g., Git).
- Problem-solving: excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: strong verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.

Preferred Qualifications:
- Certifications in AWS Data Analytics or other relevant areas.
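The real-time ingestion pattern this role describes (Kafka in, data lake out) can be sketched in a few lines with Spark Structured Streaming in Scala. The broker address, topic, and S3 bucket below are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

// Minimal streaming-ingestion sketch: consume a Kafka topic and land
// the events in S3 as Parquet. All endpoints are hypothetical.
object KafkaToS3Sketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-to-s3").getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Checkpointing makes the sink fault-tolerant and exactly-once
    // per micro-batch, which matches the reliability goals above.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://my-data-lake/raw/events/")
      .option("checkpointLocation", "s3a://my-data-lake/checkpoints/events/")
      .start()

    query.awaitTermination()
  }
}
```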

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Apache Spark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with programming languages such as Java, Scala, or Python.
- Familiarity with cloud platforms and services related to application deployment.
- Knowledge of database management systems and data modeling techniques.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
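For context on the must-have skill, here is a minimal, self-contained Apache Spark example in Scala showing the common pattern of registering a DataFrame as a temporary view and querying it with Spark SQL. The dataset and names are illustrative only.

```scala
import org.apache.spark.sql.SparkSession

// Minimal Spark sketch: a DataFrame exposed as a temp view and
// aggregated with plain SQL. Data and names are hypothetical.
object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-sql-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val orders = Seq(("o1", "IN", 250.0), ("o2", "US", 900.0),
                     ("o3", "IN", 400.0)).toDF("id", "country", "amount")
    orders.createOrReplaceTempView("orders")

    // A distributed aggregation expressed as SQL.
    spark.sql(
      "SELECT country, SUM(amount) AS total FROM orders GROUP BY country"
    ).show()

    spark.stop()
  }
}
```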

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role Overview
We are seeking a dynamic and strategic Director of Data to lead our data function in Hyderabad. This role is pivotal in shaping our data strategy, building a high-performing team, and fostering a strong data community within the organization. The Director of Data will oversee data engineering, data analytics, and data science professionals, driving cohesion in ways of working and instilling a shared sense of purpose.

This is not a traditional line management position; it incorporates two key aspects: leading and line-managing the Flutter Functions Data Platform team, and acting as the Data capability lead for the Flutter Hyderabad office, with a focus on leadership, influence, and strategic direction — creating career pathways, professional growth opportunities, and an inclusive and innovative culture. The Director of Data will also play a key role in expanding our Global Capability Center (GCC) in Hyderabad and establishing new teams for other businesses within the group as required. As part of the Hyderabad Leadership Team, the role holder will contribute to broader site leadership, culture, and operational excellence.

Key Responsibilities

Leadership & Strategy
- Define and drive the data strategy for Flutter Functions, ensuring alignment with product, architecture and the organization's business objectives.
- Establish and grow the Global Capability Center (GCC) in Hyderabad, ensuring it becomes a centre of excellence for data.
- Lead a community of data professionals (engineering, analytics, and data science), creating a culture of collaboration, learning, and innovation.
- Serve as a key member of the Hyderabad Leadership Team, contributing to broader site leadership initiatives.
- Champion best practices in all aspects of data engineering, from data governance and data management through to ethical AI/ML adoption.
- Partner with global and regional leaders to scale data capabilities across different businesses in the group as needed.

Team Building & Development
- Foster an environment that attracts, develops, and retains top data talent.
- Build career pathways and professional development opportunities for data professionals.
- Drive cross-functional collaboration between data teams, engineering, and business units.
- Advocate for a diverse, inclusive, and high-performance culture.

Operational Excellence & Ways of Working
- Enhance cohesion and standardization in data practices, tools, and methodologies across teams.
- Lead initiatives that improve efficiency, collaboration, and knowledge sharing across data teams.
- Ensure alignment with cloud-first, scalable technologies, leveraging Databricks, AWS, and other modern data platforms.
- Establish mechanisms to measure and demonstrate the business value of data-driven initiatives.

Skills & Experience

Essential
- Proven experience in a senior data leadership role, with a track record of influencing and shaping data strategy.
- Strong leadership skills with a people-first approach; able to inspire, mentor, and build a thriving data community.
- Experience working in global, matrixed organizations, driving collaboration across multiple teams.
- Deep understanding of data engineering, analytics, and data science disciplines (without requiring hands-on technical execution).
- Experience with cloud-based data technologies, particularly AWS and Databricks.
- Experience with streaming platforms such as Kafka and Apache Pulsar.
- Experience with a combination of Python, Scala, Spark and Java.
- Ability to scale teams and establish new functions, especially in a GCC or offshore model.
- Strong stakeholder management, capable of influencing senior executives and business leaders.

Desirable
- Experience in building or scaling data teams in a Global Capability Center (GCC).
- Familiarity with data governance, security, and compliance best practices.
- Previous experience working in a hybrid or global delivery model.

Posted 1 week ago

Apply

0 years

0 Lacs

Madhya Pradesh, India

On-site


Job Overview:
We are looking for an AI/ML Developer to join our team of researchers, data scientists, and developers. You will work on cutting-edge AI solutions across industries such as commerce, agriculture, insurance, financial markets, and procurement. Your role involves developing and optimizing machine learning and generative AI models to solve real-world challenges.

Key Responsibilities:
• Develop and optimize ML, NLP, Deep Learning, and Generative AI models.
• Research and implement state-of-the-art algorithms for supervised and unsupervised learning.
• Work with large-scale datasets in distributed environments.
• Understand business processes to select and apply the best ML approaches.
• Ensure scalability and performance of ML solutions.
• Collaborate with cross-functional teams, including product owners, designers, and developers.
• Solve complex data integration and deployment challenges.
• Communicate results effectively using data visualization.
• Work in global teams across different time zones.

Required Skills & Experience:
• Strong experience in Machine Learning, Deep Learning, NLP, and Generative AI.
• Hands-on expertise in frameworks like TensorFlow, PyTorch, or Hugging Face Transformers.
• Experience with LLMs (Large Language Models), model fine-tuning, and prompt engineering.
• Proficiency in Python, R, or Scala for ML development.
• Knowledge of cloud-based ML platforms (AWS, Azure, GCP).
• Experience with big data processing (Spark, Hadoop, or Dask).
• Ability to scale ML models from prototype to production.
• Strong analytical and problem-solving skills.

If you're passionate about pushing the boundaries of ML and GenAI, we'd love to hear from you!
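Since this role lists Scala alongside Spark for ML development, here is a minimal supervised-learning sketch using Spark MLlib. It is a hedged illustration only: the tiny inline dataset and feature names are invented for the example, and MLlib stands in for whichever framework the team actually uses.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

// Minimal MLlib pipeline: assemble features, fit a classifier, predict.
// Data and column names are purely illustrative.
object MlPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ml-pipeline-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val training = Seq(
      (1.0, 0.2, 1.0), (0.0, 0.9, 0.0),
      (1.0, 0.1, 1.0), (0.0, 0.8, 0.0)
    ).toDF("f1", "f2", "label")

    // Assemble raw columns into a feature vector, then fit the model.
    val assembler = new VectorAssembler()
      .setInputCols(Array("f1", "f2")).setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)

    val model = new Pipeline().setStages(Array(assembler, lr)).fit(training)
    model.transform(training).select("label", "prediction").show()
    spark.stop()
  }
}
```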

Posted 1 week ago

Apply

2.0 - 5.0 years

5 - 15 Lacs

Mumbai, Hyderabad, Chennai

Work from Office


Job Title: Data Engineering
Department: NPCI Data Analytics

Overview:
NPCI is looking for Data Engineers who are well versed in big data technologies. The candidate will work on-premises, handling massive data that is served to the data science and data modelling teams. He/she is responsible for data availability in the existing data pipelines and should be able to establish new pipelines and make data available in them to specified criteria using the latest data engineering tools and technologies. The candidate should be able to understand and analyze requirements and be well versed in providing data via files or dashboards.

Key Responsibilities:
- Understand requirements and design data pipelines to make data available in the data lake for further processing and usage by downstream systems.
- Take responsibility for 24/7 data availability by implementing reliable, fault-tolerant systems and ensuring maintainability and availability of the data.
- Good knowledge of databases, including tables, schemas and views; able to execute complex SQL queries based on business requirements and provide the required output to the relevant stakeholders.
- Write Python and Scala scripts for different types of data filtering, data conversion and data cleaning.
- Good knowledge of big data concepts and the Hadoop ecosystem.
- Be a quick learner, capable of easily adopting new technologies.
- Good knowledge of data warehouses, data lakes and data marts.

Candidate Profile:
- Experience in building data pipelines and data flows.
- Experience in data wrangling, data mining, data processing, etc.
- Experience with NoSQL databases.
- Working experience with or knowledge of DBT, Dagster, Trino SQL / Hive SQL, MinIO, S3 and Superset is an advantage.
- Experience in stream processing with Kafka, Spark and Flink.
- Experience with different data engineering tools and their configuration.
- Able to write code in different languages including Python, Scala and Java, with basic working knowledge of Linux.
- Takes complete ownership of implementing the above tasks.
- Good at BI and visualization techniques, displaying data insights appropriately to end users, with solid experience in dashboard tools like Tableau and Superset.

Job Description:
The NPCI Data Analytics team is looking for data engineers who can make data from various products and sources available, perform different kinds of processing, and store it in different storage systems and formats useful to the data science team and other stakeholders, both to solve business problems and to analyze and improve product performance. The candidate should be able to quickly understand NPCI's different products, their domain terms and their business flows, so as to understand their requirements and pain points and help by providing relevant data that is useful for their business. Strong business understanding, analytical and problem-solving skills and good programming knowledge are required.
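The "data filtering, data conversion, data cleaning" scripts this role mentions can be as simple as plain Scala with no framework. A minimal sketch follows; the pipe-delimited record layout and the rupees-to-paise conversion are hypothetical examples, not NPCI's actual formats.

```scala
// Minimal cleaning sketch: parse delimited records, drop malformed
// rows, normalize fields. The record layout is hypothetical.
object CleaningSketch {
  final case class Txn(id: String, bank: String, amountPaise: Long)

  def parse(line: String): Option[Txn] =
    line.split('|') match {
      case Array(id, bank, amt) if id.nonEmpty =>
        // Convert rupees (string) to paise (long); reject bad numbers.
        scala.util.Try((BigDecimal(amt.trim) * 100).toLongExact)
          .toOption
          .map(p => Txn(id.trim, bank.trim.toUpperCase, p))
      case _ => None // wrong field count: filter the row out
    }

  def main(args: Array[String]): Unit = {
    val raw = Seq("t1|hdfc|10.50", "bad-row", "t2|sbi|abc", "t3|icici|99")
    val cleaned = raw.flatMap(parse) // malformed rows silently dropped
    cleaned.foreach(println)
  }
}
```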

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 8 years
Location: Mumbai
Job Type: Full-time

About The Role
We're seeking a Lead Cloud Data Architect to design and lead the execution of enterprise-grade data platforms in a modern cloud environment. This is a strategic and hands-on role, ideal for a data expert who thrives on building scalable, high-performance ecosystems across Azure cloud services, big data platforms, and enterprise analytics tools. You'll be instrumental in transforming our data infrastructure by driving architectural decisions, standardizing data governance, and ensuring secure, scalable, and accessible data systems. Your work will directly enable advanced analytics, reporting, and data science across the organization.

What You'll Be Doing
- Cloud Data Architecture: Design modern, scalable data solutions on Microsoft Azure, integrating components such as Azure Data Lake, Azure Synapse, Azure SQL, and Azure Databricks. Build architecture blueprints to support streaming, batch, and real-time data processing.
- Data Pipelines & Engineering: Build and orchestrate robust, fault-tolerant data pipelines using Azure Data Factory (ADF), Databricks, and custom ETL frameworks. Drive transformation of structured, semi-structured, and unstructured data into optimized formats for downstream use.
- Big Data & Analytics Integration: Utilize Azure Databricks for large-scale data processing, machine learning data prep, and distributed data transformations. Enable seamless data flows from lake to warehouse to visualization.
- Data Governance & Quality: Implement robust data governance protocols including lineage, cataloging, classification, and access management. Ensure security, compliance, and adherence to regulatory data policies (GDPR, HIPAA, etc.).
- BI & Reporting Enablement: Collaborate with analytics and reporting teams to provide highly available, performant, and clean data to tools like Power BI. Standardize KPIs, metric definitions, and data sources for enterprise-wide reporting consistency.
- Collaboration & Leadership: Engage with product owners, business leaders, and engineering teams to define long-term data strategies. Mentor engineers, review architectures, and set coding and documentation standards.

What You Bring
- 8-10 years of experience in data architecture or engineering, including 5+ years designing solutions on Microsoft Azure.
- Expertise in Azure services: ADF, Azure Data Lake, Databricks, Synapse, Azure SQL, and Power BI.
- Proven experience building and optimizing ETL/ELT pipelines, with a deep understanding of data transformation frameworks.
- Proficiency in Python, SQL, or Scala for scripting, data wrangling, and logic workflows.
- In-depth understanding of data modeling, data warehouse concepts, performance tuning, and storage optimization.
- Familiarity with DevOps and CI/CD pipelines in the context of data engineering.
- Strong collaboration, documentation, and communication skills, with the ability to work cross-functionally.

Nice to Have
- Microsoft certifications such as Azure Data Engineer Associate or Solutions Architect.
- Hands-on experience with Azure Purview, Collibra, or other data governance and catalog tools.
- Familiarity with Apache Spark, version control systems, containerization (Docker), and orchestration (Kubernetes).

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Main Purpose:
- Participate in strategic planning discussions with technical and business customers, acting as the single-point-of-contact delivery partner for one or more sub-systems on behalf of the global techno-functional product owner.
- Architect and design solutions, and guide the vendor teams to implement them as per the architecture and design.
- Be a hands-on master developer on the system and coach the vendors' developers. Pair program with new developers on an existing system to build their capability.
- Identify gaps in technical design, functional requirements and team members' skills, and work towards closing those gaps to ensure high-quality software is delivered to meet business goals.
- Help implement a continuous learning culture within the vendor teams to build their capability in the sub-system(s) he or she is leading.

Knowledge, Skills and Abilities; Key Responsibilities:

Technical Skillsets:
- Several years of hands-on distributed systems development using the J2EE application stack, front-to-back messaging infrastructure and Oracle, preferably with complex financial systems, logistics or complex systems integrations.
- Proficient in handling the most sophisticated technical development concepts and the latest software tools and technologies, with strong database concepts and object-oriented design techniques.
- Minimum 5-10 years of hands-on coding experience with the following technologies:
  Backend: Scala, Java, J2EE, Oracle
  Messaging technologies: MQ, TIBCO, or similar messaging systems
  Frontend: React
- Understands different programming languages and is able to solve problems in coding, testing and deployment.
- Expert-level understanding of object-oriented design and development.
- Experience troubleshooting complex systems using tools like Splunk, AppDynamics or the like.

Experience:
- Minimum of 5 years of experience developing end-to-end complex systems with a multi-national or complex technology-driven firm in India.
- Minimum of 2 years of experience working with outsourced vendor partners is a BIG plus.
- Bachelor's degree in Engineering, Physics or Mathematics is required.
- Understanding of risk systems is a MUST.
- Understanding of commodities, logistics, financing, accounting or derivatives is a BIG plus.

Competencies:
- Strong oral and written communication with strong interpersonal skills to collaborate with vendor teams and global IT owners, with attention to micro-level details.
- Must be acclimatized to working and dealing with client managers / senior management.
- Strong analytical and problem-solving skills.
- Strong change management skills; ability to handle several projects simultaneously while working under pressure to meet deadlines.
- Capable of working in groups as well as independently.
- Professional management of employee relationships at all levels.
- Ability to maintain the confidentiality of sensitive information.
- Great teammate with an enthusiastic approach to fresh challenges.

Key Responsibilities:
- Operate as a delivery partner in the 3-in-box operating model and partner with global techno-functional stakeholders and vendor technical teams to deliver strategic business objectives.
- Own the BAU delivery and product support for the Risk system.
- Coach and mentor the vendor developers for the assigned work stream.

Key Relationships and Department Overview:
- External: Strategic outsourcing partners.
- Internal: Technical and functional partners and stakeholders based in the UK, Moscow, Geneva, China, etc.
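For a flavor of the front-to-back messaging work this role describes, here is a minimal Scala sketch against the standard JMS API. It is a hedged illustration only: ActiveMQ stands in for whichever broker (MQ, TIBCO, or similar) the real system uses, and the broker URL and queue name are placeholders.

```scala
import javax.jms.{Connection, MessageConsumer, Session, TextMessage}
import org.apache.activemq.ActiveMQConnectionFactory

// Minimal JMS consumer sketch: connect to a broker, read one message
// from a queue, and hand it to downstream processing. Endpoint and
// queue names are hypothetical.
object RiskFeedConsumerSketch {
  def main(args: Array[String]): Unit = {
    val factory = new ActiveMQConnectionFactory("tcp://localhost:61616")
    val connection: Connection = factory.createConnection()
    connection.start()

    val session: Session =
      connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    val consumer: MessageConsumer =
      session.createConsumer(session.createQueue("risk.trades.inbound"))

    // Block for one message, then pass it on.
    consumer.receive() match {
      case msg: TextMessage => println(s"received trade: ${msg.getText}")
      case other            => println(s"unexpected message: $other")
    }
    connection.close()
  }
}
```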

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Description
The Data Engineer will help build and maintain the cloud Data Lake platform leveraging Databricks. Candidates will be expected to contribute to all stages of the data lifecycle, including data ingestion, data modeling, data profiling, data quality, data transformation, data movement, and data curation.

Responsibilities:
- Architect data systems that are resilient to disruptions and failures
- Ensure high uptime for all data services
- Bring modern technologies and practices into the system to improve reliability and support rapid scaling of the business's data needs
- Scale up our data infrastructure to meet business needs
- Develop production data pipeline patterns
- Provide subject matter expertise and hands-on delivery of data acquisition, curation and consumption pipelines on Azure
- Stay current with emerging state-of-the-art computer and cloud-based solutions and technologies
- Build effective relationships with internal stakeholders
- Familiarity with the technology stack available in the industry for metadata management: Data Governance, Data Quality, MDM, Lineage, Data Catalog, etc.
- Hands-on experience implementing analytics solutions leveraging Python, Spark SQL, Databricks Lakehouse Architecture, Kubernetes, Docker
- All other duties as assigned

Qualifications
- Bachelor's degree in Computer Science, Information Technology, Management Information Systems (MIS), Data Science or a related field; applicable years of experience may be substituted for the degree requirement
- Up to 8 years of experience in software engineering
- Experience with large and complex data projects, preferred
- Experience with large-scale data warehousing architecture and data modeling, preferred
- Experience with cloud-based architecture such as Azure Cloud, preferred
- Experience working with big data technologies, e.g. Snowflake, Redshift, Synapse, Postgres, Airflow, Kafka, Spark, DBT, preferred
- Experience implementing pub/sub and streaming use cases, preferred
- Experience in design reviews, preferred
- Experience influencing a team's technical and business strategy by making insightful contributions to team priorities and approaches, preferred
- Working knowledge of relational databases, preferred
- Expert in SQL and high-level languages such as Python, Java or Scala, preferred
- Demonstrated ability to analyze large data sets to identify gaps and inconsistencies in ETL pipelines and provide solutions for pipeline reliability and data quality, preferred
- Experience in an infrastructure-as-code / CI/CD development environment, preferred
- Proven ability to build, manage and foster a team-oriented environment
- Excellent communication (written and oral) and interpersonal skills
- Excellent organizational, multi-tasking, and time-management skills

Job: Engineering
Primary Location: India-Maharashtra-Mumbai
Schedule: Full-time
Travel: No
Req ID: 244483
Job Hire Type: Experienced

Posted 1 week ago

Apply

6.0 - 11.0 years

13 - 18 Lacs

Ahmedabad

Work from Office

Naukri logo

About the Company
e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, high-performance, biocompatible, clinically-effective and accessible skincare. In our fiscal year 2024 we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest-growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid work environment (3 days in office, 2 days at home). We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us

Job Summary:
We're looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. This person will play a critical role in shaping the company's global data architecture and vision. The ideal candidate will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF) and tools like CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role.

Key Responsibilities
- Enterprise Data Strategy: Design, define and maintain a holistic data strategy and roadmap that aligns with corporate objectives and fuels digital transformation. Ensure data architecture and products align with enterprise standards and best practices.
- Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA). Oversee quality, security and compliance initiatives.
- Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka.
- Marketing Technology Integration: Ensure data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry.
- Architectural Leadership: Act as a hands-on architect with the ability to lead through influence. Guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap.
- Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact. Lead integration of data from multiple sources into a unified data warehouse.
- Cloud Optimization: Optimize data flows and storage for performance and scalability. Lead data migration priorities; manage metadata repositories and data dictionaries. Optimise databases and pipelines for efficiency. Manage and track quality, cataloging and observability.
- AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models. Evaluate emerging data technologies and tools to improve data architecture.
- Team Leadership: Lead, mentor, and enable a high-performing team of data engineers, analysts, and partners through influence and thought leadership.
- Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools.
- Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders.
- Data Enablement: Design data models, database structures, and data integration solutions to support large volumes of data.

Qualifications and Requirements
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
- 18+ years of experience in Information Technology
- 8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems
- Strong understanding of data use cases in the CPG/FMCG sector
- Experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred
- Awareness of architecture frameworks like TOGAF; certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles
- Excellent communication skills and a proven ability to work cross-functionally across global teams; capable of leading with influence, not just execution
- Knowledge of data warehousing, ETL/ELT processes, and data modeling
- Deep understanding of data modeling principles, including schema design and dimensional data modeling
- Strong SQL development experience, including SQL queries and stored procedures
- Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering
- Familiarity with data security and governance best practices
- Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP
- Excellent problem-solving abilities with a focus on data analysis and interpretation
- Strong communication and collaboration skills; ability to translate complex technical concepts into actionable business strategies
- Proficiency in one or more programming languages such as Python, Java, or Scala

This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered a detailed description of all the work requirements inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors' discretion.

e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
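To make the dimensional-modeling requirement above concrete, below is a small, self-contained Scala Spark sketch of a star schema: a narrow fact table (keys plus measures) joined to a descriptive dimension, then rolled up by category. Table contents and names are invented for illustration.

```scala
// Invented star-schema example: a narrow fact table (keys + measures)
// joined to a descriptive dimension, then rolled up by category.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StarSchemaDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("star-schema-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val dimProduct = Seq((1, "Lipstick", "Cosmetics"), (2, "Serum", "Skincare"))
      .toDF("product_key", "product_name", "category")

    val factSales = Seq((1, 3, 29.97), (2, 1, 15.00), (1, 2, 19.98))
      .toDF("product_key", "units", "revenue")

    // Descriptive attributes live on the dimension; the fact stays narrow.
    factSales.join(dimProduct, "product_key")
      .groupBy($"category")
      .agg(sum($"revenue").as("revenue"), sum($"units").as("units"))
      .show()

    spark.stop()
  }
}
```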

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Design, implement, manage and optimize data pipelines in Azure Data Factory as per customer business requirements.
- Design and develop Spark SQL/PySpark code in Databricks.
- Integrate different Azure services and external systems to implement data analytics solutions.
- Design and develop code in Azure Logic Apps, Azure Functions, Azure SQL, Synapse, etc.
- Implement best practices in ADF / Databricks / other Azure data engineering services / target databases to maximize job performance, ensure code reusability and minimize implementation and maintenance cost.
- Ingest structured/semi-structured/unstructured data into ADLS/Blob Storage in batch/near real time/real time from different source systems, including RDBMSs, ERPs, file systems, storage services, APIs, event producers, NoSQL DBs, etc.
- Develop advanced code using SQL/Python/Scala/Spark/data engineering tools/other query languages to process data as per business requirements.
- Develop data ingestion, integration and transformation frameworks to ensure best data services.
- Understand the client's business requirements to design data models as per the requirements.
- Design data warehouses, data lakes and other modern analytics systems to provide batch / near-real-time / real-time data analytics capabilities to the customer.

Mandatory Skill Sets: ADF + Python, Azure data engineering
Preferred Skill Sets: ADF + Python, Azure data engineering
Years of Experience Required: 3-8
Education Qualification: BTech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language)
Optional Skills
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
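One way to read the "code reusability" point in the responsibilities above: shared, composable quality checks that any ADF-triggered Databricks job can call. A minimal sketch in Scala; the object and method names are hypothetical, not part of any PwC framework.

```scala
// Sketch of a reusable data-quality helper that any Databricks job in the
// pipeline could share; the object and method names are hypothetical.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

object QualityChecks {
  /** Split a frame into (valid, rejected) rows based on required columns. */
  def requireNonNull(df: DataFrame, required: Seq[String]): (DataFrame, DataFrame) = {
    val predicate = required.map(c => col(c).isNotNull).reduce(_ && _)
    (df.filter(predicate), df.filter(!predicate))
  }
}
```

A pipeline might then write the rejected frame to a quarantine path so downstream profiling can explain the failures.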

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience of 5+ years.
- Hands-on working experience in data engineering.
- Strong working experience in SQL, Python or Scala.
- Deep understanding of cloud design patterns and their implementation.
- Experience working with Snowflake as a data warehouse solution.
- Experience with Power BI data integration.
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with structured and unstructured data from multiple sources (APIs, databases, flat files, cloud platforms).
- Strong understanding of data modelling, warehousing (e.g., star/snowflake schema), and relational database systems (PostgreSQL, MySQL, etc.).
- Hands-on experience with ETL tools such as Apache Airflow, Talend, Informatica, or similar.
- Strong problem-solving skills and a passion for continuous improvement.
- Strong communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
- Writing and reviewing great quality code.
- Understanding the client's business use cases and technical requirements and being able to convert them into a technical design that elegantly meets the requirements.
- Mapping decisions with requirements and being able to translate the same to developers.
- Identifying different solutions and being able to narrow down the best option that meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
- Resolving issues raised during code review through exhaustive systematic analysis of the root cause, and being able to justify the decisions taken.
- Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
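A common building block behind "scalable data pipelines and ETL processes" like those listed above is a keep-latest-record step before loading a warehouse table. A minimal sketch in Scala Spark; the column names are invented for illustration.

```scala
// Common ETL step sketched in Scala Spark: keep only the latest record per
// business key before loading a warehouse table. Column names are invented.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object LatestPerKey {
  def latest(df: DataFrame, keyCol: String, tsCol: String): DataFrame = {
    val w = Window.partitionBy(col(keyCol)).orderBy(col(tsCol).desc)
    df.withColumn("rn", row_number().over(w))
      .filter(col("rn") === 1)   // first row per key by most recent timestamp
      .drop("rn")
  }
}
```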

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
- Implement data ingestion, processing, and storage solutions on the GCP cloud platform, leveraging its services.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly GCP, and proficiency in GCP services.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: GCP, PySpark, Spark
Preferred Skill Sets: GCP, PySpark, Spark
Years of Experience Required: 4-8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
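As a small illustration of the cost-effectiveness point in the responsibilities above, declaring an explicit schema skips Spark's inference pass over source files. A sketch under stated assumptions: the bucket path and fields are placeholders, and the GCS connector is assumed to be on the classpath.

```scala
// Sketch: an explicit schema avoids Spark's schema-inference pass over the
// source files. Bucket path and fields are placeholders, and the GCS
// connector is assumed to be available.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object TypedIngest {
  val eventSchema: StructType = StructType(Seq(
    StructField("event_id", StringType, nullable = false),
    StructField("user_id", StringType, nullable = true),
    StructField("event_ts", TimestampType, nullable = true)
  ))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("typed-ingest")
      .master("local[*]")
      .getOrCreate()

    val events = spark.read.schema(eventSchema).json("gs://example-bucket/events/")
    events.printSchema()
    spark.stop()
  }
}
```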

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description
The AWS Fintech team is looking for a Data Engineering Manager to transform and optimize high-scale, world-class financial systems that power the global AWS business. The success of these systems will fundamentally impact the profitability and financial reporting for AWS and Amazon. This position will play an integral role in leading programs that impact multiple AWS cost optimization initiatives. These programs will involve multiple development teams across diverse organizations to build sophisticated, highly reliable financial systems. These systems enable routine finance operations as well as machine learning, analytics, and GenAI reporting that enable AWS Finance to optimize profitability and free cash flow. This position requires a proactive, highly organized individual with an aptitude for data-driven decision making, a deep curiosity for learning new systems, and collaborative skills to work with both technical and financial teams.

Key job responsibilities
- Build and lead a team of data engineers, application development engineers, and systems development engineers
- Drive execution of data engineering programs and projects
- Help our leadership team make challenging decisions by presenting well-reasoned, data-driven solution proposals and prioritized recommendations
- Identify and execute on opportunities for our organization to move faster in delivering innovations to our customers
- This role has on-call responsibilities

A day in the life
The successful candidate will build and grow a high-performing data engineering team to transform financial processes at Amazon. The candidate will be curious and interested in the capabilities of Large Language Model-based development tools like Amazon Q to help teams accelerate transformation of systems. The successful candidate will begin with execution to familiarize themselves with the space and then construct a strategic roadmap for the team to innovate. You thrive and succeed in an entrepreneurial environment, and are not hindered by ambiguity or competing priorities. You thrive when driving strategic initiatives and also dig in deep to get the job done.

About The Team
The AWS FinTech team enables the growth of earth's largest cloud provider by building world-class finance technology solutions for effective decision making. We build scalable long-term solutions that provide transparency into financial business insights while ensuring the highest standards of data quality, consistency, and security. We encourage a culture of experimentation and invest in big ideas and emerging technologies. We are a globally distributed team with software development engineers, data engineers, application developers, technical program managers, and product managers. We invest in providing a safe and welcoming environment where inclusion, acceptance, and individual values are honored.

Basic Qualifications
- Experience managing a data or BI team
- 2+ years of experience processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark or a Hadoop-based big data solution)
- 2+ years of experience with relational database technology (such as Redshift, Oracle, MySQL or MS SQL)
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes)
- 5+ years of data engineering experience
- Experience communicating with senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

Preferred Qualifications
- Knowledge of the software development life cycle or agile development environments, with an emphasis on BI practices
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with AWS tools and technologies (Redshift, S3, EC2)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2961772

Posted 1 week ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Naukri logo

We have an immediate opening for a Big Data Developer with Encora Innovation Labs in Bangalore.

Exp: 4 to 8 Yrs
Location: Bangalore (Hybrid)
Budget: Not a constraint for the right candidate

Job Description:
- Spark and Scala
- Hive, Hadoop
- Strong communication skills

If interested, please revert with your updated resume and a passport-size photo, along with the details below.

Total Exp:
Rel Exp:
CTC:
ECTC:
Notice Period (Immediate to 15 Days):
Current Location:
Preferred Location:
Any offers in Hand:
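For candidates gauging the "Spark and Scala / Hive, Hadoop" expectation, a minimal Spark-with-Hive sketch; the sales.orders table is hypothetical and a configured Hive metastore is assumed.

```scala
// Minimal Spark-on-Hive sketch; the sales.orders table is hypothetical and a
// configured Hive metastore is assumed.
import org.apache.spark.sql.SparkSession

object HiveQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-query")
      .enableHiveSupport()   // resolve table metadata via the Hive metastore
      .getOrCreate()

    spark.sql("SELECT dt, COUNT(*) AS cnt FROM sales.orders GROUP BY dt").show()
    spark.stop()
  }
}
```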

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team—comprising solution architects, data scientists, engineers, product managers, and designers—collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Our ecosystem empowers rapid execution with plug-and-play tools, enabling scalable, AI-powered strategies that fast-track your digital transformation. With a focus on automation and seamless integration, we help you stay ahead—letting you focus on your core while we accelerate your growth.

Responsibilities & Qualifications
- Data Architecture Design: Develop and implement scalable and efficient data architectures for batch and real-time data processing. Design and optimize data lakes, warehouses, and marts to support analytical and operational use cases.
- ETL/ELT Pipelines: Build and maintain robust ETL/ELT pipelines to extract, transform, and load data from diverse sources. Ensure pipelines are highly performant, secure, and resilient to handle large volumes of structured and semi-structured data.
- Data Quality and Governance: Establish data quality checks, monitoring systems, and governance practices to ensure the integrity, consistency, and security of data assets. Implement data cataloging and lineage tracking for enterprise-wide data transparency.
- Collaboration with Teams: Work closely with data scientists and analysts to provide accessible, well-structured datasets for model development and reporting. Partner with software engineering teams to integrate data pipelines into applications and services.
- Cloud Data Solutions: Architect and deploy cloud-based data solutions using platforms like AWS, Azure, or Google Cloud, leveraging services such as S3, BigQuery, Redshift, or Snowflake. Optimize cloud infrastructure costs while maintaining high performance.
- Data Automation and Workflow Orchestration: Utilize tools like Apache Airflow, n8n, or similar platforms to automate workflows and schedule recurring data jobs. Develop monitoring systems to proactively detect and resolve pipeline failures.
- Innovation and Leadership: Research and implement emerging data technologies and methodologies to improve team productivity and system efficiency. Mentor junior engineers, fostering a culture of excellence and innovation.

Required Skills
- Experience: 7+ years of overall experience in data engineering roles, with at least 2+ years in a leadership capacity. Proven expertise in designing and deploying large-scale data systems and pipelines.
- Technical Skills: Proficiency in Python, Java, or Scala for data engineering tasks. Strong SQL skills for querying and optimizing large datasets. Experience with data processing frameworks like Apache Spark, Beam, or Flink. Hands-on experience with ETL tools like Apache NiFi, dbt, or Talend. Experience in pub/sub and stream processing using Kafka/Kinesis or the like.
- Cloud Platforms: Expertise in one or more cloud platforms (AWS, Azure, GCP) with a focus on data-related services.
- Data Modeling: Strong understanding of data modeling techniques (dimensional modeling, star/snowflake schemas).
- Collaboration: Proven ability to work with cross-functional teams and translate business requirements into technical solutions.

Preferred Skills
- Familiarity with data visualization tools like Tableau or Power BI to support reporting teams.
- Knowledge of MLOps pipelines and collaboration with data scientists.
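To ground the pub/sub and stream-processing requirement above, here is a hedged Scala sketch wiring Kafka into Spark Structured Streaming. The broker address and topic are placeholders, and the console sink stands in for a real lake or warehouse target.

```scala
// Hedged sketch of Kafka feeding Spark Structured Streaming; broker address
// and topic are placeholders, console sink is for local experimentation only.
import org.apache.spark.sql.SparkSession

object StreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("stream-ingest")
      .master("local[*]")
      .getOrCreate()

    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    stream.writeStream
      .format("console")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```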

Posted 1 week ago

Apply

5.0 - 10.0 years

13 - 23 Lacs

Hyderabad, Pune

Hybrid

Naukri logo

JD – Data Engineer (Spark, Scala, Big Data)

Job Details / Overview:
- Overall IT experience of 5+ years with strong programming skills
- Excellent skills in the Scala language
- Excellent in Apache Spark, Hadoop, Hive, Cloudera, Kafka

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Job location: Bangalore
Total experience: 6 to 8 years

Job Description
- Languages: Scala/Python 3.x
- File System: HDFS
- Frameworks: Spark 2.x/3.x (Batch/SQL API), Hadoop, Oozie/Airflow
- Databases: HBase, Hive, SQL Server, Teradata
- Version Control System: GitHub
- Other Tools: Zendesk, JIRA

Mandatory Skill Set: Scala/Python
Preferred Skill Set: Scala/Python
Years of experience required: 5+
Qualifications: BTech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required:
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language)
Optional Skills
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date

Posted 1 week ago

Apply

15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Organizations everywhere struggle under the crushing costs and complexities of "solutions" that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There's another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks' customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description
As a member of the Data Platform team, you'll be at the forefront of transforming how the Freshworks Datalake can be harnessed to the fullest in making data-driven decisions.

Key job responsibilities:
- Drive the backbone of our data platform by building robust pipelines that turn complex data into actionable insights using AWS and the Databricks platform
- Be a data detective by ensuring our data is clean, accurate, and trustworthy
- Write clean, efficient code that handles massive amounts of structured and unstructured data

Qualifications
- Proficient in at least one major language (Scala or Python) and Kafka (any variant); writes elegant and maintainable code and is comfortable picking up new technologies.
- Proficient in working with distributed systems, with experience in distributed processing frameworks that handle data in batch and near real time (e.g., Spark).
- Experience working with various AWS services and Databricks to build end-to-end data solutions that bring different systems together.
- This role will follow IST working hours and may require weekend availability for monitoring and support activities as needed.
- Requires 8–15 years of experience in a related field.

Additional Information
At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion, irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
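A minimal sketch of the batch-plus-near-real-time processing the role mentions, assuming a local Kafka broker and a hypothetical "clicks" topic: a windowed count with a watermark so state for late events stays bounded.

```scala
// Assumed setup: a local Kafka broker with a "clicks" topic. The sketch counts
// events per minute with a watermark so state for late events stays bounded.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MinuteCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("minute-counts")
      .master("local[*]")
      .getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "clicks")
      .load()
      .select(col("timestamp"))   // the Kafka source exposes a message timestamp

    val counts = events
      .withWatermark("timestamp", "5 minutes")        // tolerate 5 min lateness
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    counts.writeStream
      .format("console")
      .outputMode("update")
      .start()
      .awaitTermination()
  }
}
```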

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies