1.0 - 6.0 years
3 - 7 Lacs
Hyderabad
Work from Office
What you will do
As a Business Intelligence Engineer, you will solve unique and complex problems at a rapid pace, using the latest technologies to create highly scalable solutions. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable solutions and responding to requests for rapid releases of analytical outcomes.
- Design, develop, and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos, others).
- Analyse datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
- Partner with leaders and stakeholders across Finance, Sales, Customer Success, Marketing, Product, and other departments to understand their data and reporting requirements.
- Stay abreast of the latest trends and technologies in business intelligence and data analytics, including the use of AI in BI.
- Elicit and document clear, comprehensive business requirements for BI solutions, translating business needs into technical specifications and solutions.
- Collaborate with Data Engineers to ensure efficient upstream transformations and create data models/views that feed accurate, reliable BI reporting.
- Contribute to data quality and governance efforts to ensure the accuracy and consistency of BI data.

What we expect of you
Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Functional Skills:
- 1+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling (a sketch of this workflow follows this posting)

Preferred Qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
- AWS Developer certification (preferred)

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills
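For illustration, a minimal sketch of the SQL-plus-Python workflow named in the functional skills above: pull an aggregate from a warehouse and shape it with pandas. The cluster endpoint, credentials, and table/column names are all placeholders, and psycopg2 is one common choice for Redshift's PostgreSQL-compatible interface, not something the posting prescribes.

```python
# Pull a 90-day revenue aggregate from a warehouse and smooth it for a dashboard.
import pandas as pd
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="bi_user", password="***",
)

query = """
    SELECT order_date, region, SUM(revenue) AS revenue
    FROM sales.orders
    WHERE order_date >= CURRENT_DATE - 90
    GROUP BY 1, 2
"""
df = pd.read_sql_query(query, conn)

# Light transformation before handing off to a BI tool or a model:
df["order_date"] = pd.to_datetime(df["order_date"])
trend = df.pivot_table(index="order_date", columns="region",
                       values="revenue", aggfunc="sum").rolling(7).mean()
print(trend.tail())
```

The resulting frame could feed a Tableau extract, a QuickSight dataset, or a downstream model, which is the handoff pattern the role describes.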
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs, and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale, and harnesses the power of enterprise-level technology for businesses of all sizes.

Zenoti powers more than 30,000 salons, spas, medspas, and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga, and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, and ranking as the 316th fastest-growing company in North America on Deloitte's 2020 Technology Fast 500™. We are also proud to be recognized as Great Place to Work Certified™ for 2021-2022, reaffirming our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com

Our products are built on Windows .NET and SQL Server and managed in AWS. Our web UX stack is built on jQuery, and some areas use AngularJS. Our middle tier is in C#, and we build our infrastructure on an extensive set of RESTful APIs. We build native iOS and Android apps, and are starting to experiment with Flutter and Dart. For select infrastructure components we use Python extensively, and we use Tableau for analytics dashboards. We use Redshift, Aurora, Redis ElastiCache, Lambda, and other AWS and Azure products to build and manage our complete service, moving towards serverless components. We deal with billions of API calls, millions of database records, and terabytes of data across services that have to run 24x7 at 99.99% availability.

What will I be doing?
- Design, develop, test, release, and maintain components of Zenoti
- Collaborate with a team of PMs, developers, and QA to release features
- Work in a team following agile development practices (Scrum)
- Build usable software that is released at high quality, runs at scale, and is adopted by customers
- Learn to scale your features to handle 2x-4x growth every year, and manage code that deals with millions of records and terabytes of data
- Release new features into production every month, and get real feedback from thousands of customers to refine your designs
- Be proud of what you work on; obsess about the quality of the work you produce

What skills do I need?
- 5+ years of experience working on iOS/Android to build mobile apps
- 1+ years of experience in Flutter
- Strong experience in Swift/Java/Kotlin
- Experience in creating mobile app workflows, storyboards, and user flows
- Proven experience writing readable code, creating extensive documentation for existing code, and refactoring previously written code
- Experience working in an Agile/Scrum development process
- Experience with third-party libraries and APIs
- Strong, demonstrated ability to design modules for mobile applications
- Strong logical, analytical, and problem-solving skills
- Excellent communication skills
- Ability to work in a fast-paced, ever-changing startup environment

Benefits
- Attractive compensation
- Comprehensive medical coverage for yourself and your immediate family
- An environment where wellbeing is a high priority: access to regular yoga, meditation, breathwork, nutrition counseling, and stress management, with family included in most benefit-awareness sessions
- Opportunities to be part of a community and give back: social activities are part of our culture, with regular engagement, social work, and community give-back initiatives

Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs, and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale, and harnesses the power of enterprise-level technology for businesses of all sizes.

Zenoti powers more than 30,000 salons, spas, medspas, and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga, and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, and ranking as the 316th fastest-growing company in North America on Deloitte's 2020 Technology Fast 500™. We are also proud to be recognized as Great Place to Work Certified™ for 2021-2022, reaffirming our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com

Our products are built on Windows .NET and SQL Server and managed in AWS. Our web UX stack is built on jQuery, and we use AngularJS. Our middle tier is in C#, and we build our infrastructure on an extensive set of RESTful APIs. We build native iOS and Android apps using Flutter and Dart. Our platform infrastructure is built in .NET Core and deployed on Red Hat Enterprise Linux (RHEL) using Docker and Kubernetes. We use Python extensively for data processing workloads, and Tableau for analytics dashboards for select infrastructure components. We use Redshift, Aurora, Redis ElastiCache, Lambda, and other AWS and Azure products to build and manage our complete service, moving towards serverless components. We deal with billions of API calls, millions of database records, and terabytes of data across services that have to run 24x7 at 99.99% availability.

What will I be doing?
- Design, develop, test, release, and maintain components of Zenoti
- Collaborate with a team of PMs, developers, and QA to release features
- Work in a team following agile development practices (Scrum)
- Build usable software that is released at high quality, runs at scale, and is adopted by customers
- Learn to scale your features to handle 2x-4x growth every year, and manage code that deals with millions of records and terabytes of data
- Release new features into production every month, and get real feedback from thousands of customers to refine your designs

What skills do I need?
- Knowledge of designing and developing applications on the Microsoft stack
- Strong knowledge of building web applications
- Strong knowledge of HTML, JavaScript, CSS, jQuery, and .NET/IIS with C#
- Proficiency in Microsoft SQL Server
- Knowledge of developing web applications using Angular/Flutter/Dart a plus
- Strong logical, analytical, and problem-solving skills
- Excellent communication skills
- Ability to work in a fast-paced, ever-changing startup environment

Why Zenoti?
- Be part of an innovative company that is revolutionizing the wellness and beauty industry.
- Work with a dynamic and diverse team that values collaboration, creativity, and growth.
- Opportunity to lead impactful projects and help shape the global success of Zenoti's platform.
- Attractive compensation.
- Medical coverage for yourself and your immediate family.
- Access to regular yoga, meditation, breathwork, and stress management sessions. We also include your family in benefit awareness initiatives.
- Regular social activities, and opportunities to give back through social work and community initiatives.

Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We seek a forward-thinking, AI-first Data Analyst who leverages AI tools to transform data into actionable business insights. The ideal candidate will have a strong foundation in data analysis combined with an innovative mindset for solving complex problems with AI technologies. You'll replace traditional manual processes with AI-powered solutions, working with tools like Redshift, Zoho Analytics, SQL, and QuickSight to drive data-driven decision-making across the organization.

Key Responsibilities
- AI-Powered Analysis: Utilize AI tools (Claude, Llama, Grok, Copilot, etc.) to analyse large datasets, generate insights, and automate repetitive analytical tasks
- Intelligent Dashboard Development: Create and maintain dynamic dashboards and reports using AI-assisted development, leveraging Redshift, QuickSight, and Zoho Analytics
- AI-Enhanced Problem Solving: Tackle complex analytical challenges by combining domain expertise with AI capabilities to solve problems previously considered unsolvable
- Automated Data Pipeline Management: Design and optimise data pipelines using AI tools for code generation, debugging, and performance enhancement
- Advanced Analytics: Conduct sophisticated cohort analysis and predictive modeling using AI-powered statistical tools (a cohort-analysis sketch follows this posting)
- Intelligent Reporting: Generate comprehensive insights on customer behavior and business metrics using AI for pattern recognition and trend analysis
- Innovation Leadership: Continuously explore and implement new AI tools and methodologies to enhance analytical capabilities

Key Requirements
- Bachelor's degree in Data Science, Statistics, Computer Science, Economics, or a related field
- 2+ years of experience in data analysis with demonstrated strong AI tool proficiency
- Strong AI-first mindset: must be proficient in leveraging AI APIs (ChatGPT, Claude, GitHub Copilot, etc.) for data analysis, code generation, and problem-solving
- Proven track record of using AI to solve analytical problems that couldn't be solved through traditional methods
- Advanced SQL skills with experience in Redshift, QuickSight, and Zoho Analytics
- Experience with AI-assisted coding and automation tools
- Strong analytical thinking combined with creative problem-solving using AI
- Excellent communication skills to present AI-generated insights to stakeholders
- Self-motivated learner who stays current with AI developments in data analytics

Nice To Have
- Experience with machine learning platforms and AI/ML model deployment
- Knowledge of advanced prompt engineering and AI workflow optimization
- Experience in SaaS or product-driven companies
- Background in implementing AI solutions for business intelligence

What Sets You Apart: You don't just use AI as a helper; you think AI-first. You approach every analytical challenge by first considering how AI can enhance, automate, or completely revolutionize the solution. You're excited about pushing the boundaries of what's possible in data analysis through intelligent tool usage.
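As an illustration of the cohort analysis the responsibilities mention, here is a small pandas sketch on an invented toy events table; the user_id and event_date column names are assumptions, not fields from the posting.

```python
# Hypothetical cohort-retention calculation: group users by first-activity month,
# then count how many return in each subsequent month.
import pandas as pd

events = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 3, 3, 3],
    "event_date": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-01-20",
                                  "2024-03-02", "2024-02-01", "2024-02-15",
                                  "2024-04-01"]),
})

# Assign each user to the month of their first event (their cohort).
events["event_month"] = events["event_date"].dt.to_period("M")
events["cohort"] = events.groupby("user_id")["event_month"].transform("min")
events["period"] = (events["event_month"] - events["cohort"]).apply(lambda d: d.n)

# Distinct users retained per cohort and months-since-first-activity.
cohorts = (events.groupby(["cohort", "period"])["user_id"]
                 .nunique().unstack(fill_value=0))
retention = cohorts.div(cohorts[0], axis=0)  # normalize by cohort size
print(retention)
```

The same frame-shaped output drops straight into QuickSight or Zoho Analytics as a heatmap-style retention matrix.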
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience
Position: Senior Associate
Industry: Supply Chain/Forecasting/Financial Analytics

Required Skills: Successful candidates will have demonstrated the following skills and characteristics:

Must Have
- Strong supply chain domain knowledge (inventory planning, demand forecasting, logistics)
- Well-versed, hands-on experience with optimization methods such as linear programming, mixed integer programming, and scheduling optimization; understanding of third-party optimization solvers like Gurobi is an added advantage
- Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, NBeats) and machine learning techniques (supervised and unsupervised); a Holt-Winters sketch follows this posting
- Experience using at least one major cloud platform (AWS, Azure, GCP), such as:
  - AWS: SageMaker, Redshift, Glue, Lambda, QuickSight
  - Azure: ML Studio, Synapse Analytics, Data Factory, Power BI
  - GCP: BigQuery, Vertex AI, Dataflow, Cloud Composer, Looker
- Experience developing, deploying, and monitoring ML models on cloud infrastructure
- Expertise in Python, SQL, data orchestration, and cloud-native data tools
- Hands-on experience with cloud-native data lakes and lakehouses (e.g., Delta Lake, BigLake)
- Familiarity with infrastructure-as-code (Terraform/CDK) for cloud provisioning
- Knowledge of visualization tools (Power BI, Tableau, Looker) integrated with cloud backends
- Strong command of statistical modeling, testing, and inference
- Advanced capabilities in data wrangling, transformation, and feature engineering
- Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Airflow)
- Strong communication and stakeholder engagement skills at the executive level

Roles And Responsibilities
- Assist analytics projects within the supply chain domain, driving design, development, and delivery of data science solutions
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert, formalizing the data sources to be used, datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools, and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity, including data quality, model robustness, and explainability for deployments
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute
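To make the forecasting requirement concrete, here is a hedged Holt-Winters example with statsmodels on a synthetic monthly demand series; the data, seasonal period, and parameters are illustrative only, not from the posting.

```python
# Triple exponential smoothing (Holt-Winters) on a toy demand series with
# trend and yearly seasonality, then a 12-month-ahead forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(42)
demand = pd.Series(
    100 + np.arange(48) * 2                        # upward trend
    + 15 * np.sin(2 * np.pi * np.arange(48) / 12)  # yearly seasonality
    + rng.normal(0, 5, 48),                        # noise
    index=idx,
)

model = ExponentialSmoothing(demand, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)  # next 12 months
print(forecast.round(1))
```

In practice one would compare this baseline against ARIMA-family or Prophet-style models on held-out periods before choosing a production method.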
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Software Engineering - Software Engineer

Job Summary
We are seeking an experienced Lead Software Engineer (Tech Lead) with 8+ years of professional experience to lead the design and development of high-quality software solutions using industry best practices within our engineering team. The ideal candidate will be responsible for developing clean, efficient, and scalable code in React/Java, adhering to design principles and ensuring that code quality meets industry standards. This role defines and executes the technical roadmap for the SmartFM product. SmartFM is an advanced facility management platform designed to ingest data from multiple sources, process it efficiently, and provide a unified view of operational metrics. With built-in AI/ML algorithms, the platform offers analytics-driven insights, raises alerts and alarms, and recommends actions to optimize building operations. The individual will serve as a technology leader, fostering the delivery of world-class product solutions to deliver the highest value to the business. The successful individual will utilize strong leadership skills and deep expertise, while exhibiting learning agility to promote open-source solutions.

Roles And Responsibilities
- Design and develop advanced technical systems, ensuring robust, scalable, and innovative solutions while collaborating seamlessly across diverse teams and ensuring the timely and effective delivery of high-quality technical solutions
- Develop in line with designs, adhering to coding best practices and integrating automated testing to ensure high code coverage and minimize defects
- Thrive in collaborative environments, contributing positively to group dynamics
- Oversee the management and performance of production systems and databases, ensuring reliability, scalability, and uninterrupted operation
- Diagnose and resolve complex technical, data, and software challenges, guiding troubleshooting efforts to maintain and enhance product performance in dynamic environments
- Assess emerging tools, platforms, and technologies, advocating for innovation and implementing best practices to improve system capabilities and efficiency

Required Technical Skills And Experience
- Proficient in application development languages and frameworks such as React, Node.js, Nest.js, C#, MS SQL, and T-SQL
- Skilled in utilizing DevOps tools, including Azure DevOps (ADO), Jenkins, Maven, Gradle, Docker, and Azure Pipelines
- Expertise in programming languages like Java/Spring, .NET, AngularJS, Python, Go, and Swift, both on cloud platforms and devices
- Extensive experience with Azure and AWS cloud technologies, including designing and deploying globally accessible, scalable applications
- Strong background in mobile application development
- Advanced knowledge of Agile methodologies for application development
- Demonstrated expertise in application modernization, transitioning legacy systems to contemporary architectures such as microservices, APIs, and always-on designs, with an emphasis on leveraging cloud technologies
- Proven hands-on experience with MongoDB
- Experience in developing, maintaining, and optimizing scalable data pipelines
- Experience with data engineering technologies such as Hadoop, Spark, and Kafka (a streaming sketch follows this posting)
- Experience working with data warehouse tools like Redshift, BigQuery, or Snowflake

Additional Qualifications
- Proven expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences
- Strong problem-solving and analytical skills
- Experience with IBM StreamSets
- Experience collaborating and communicating seamlessly with diverse technology roles, including development, support, product management, and systems administration
- Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in software development and business needs

Education Requirements / Experience
Bachelor's (BE / BTech) / Master's degree (MS / MTech) in computer science, information systems, mathematics, or a related field
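For a sense of the Spark-plus-Kafka pipeline style the requirements name, here is a hedged structured-streaming sketch. The broker address, topic, and telemetry fields are invented for illustration; this is one plausible shape for a facility-telemetry aggregation, not SmartFM's actual code.

```python
# Read device telemetry from a Kafka topic and compute 5-minute rolling
# averages per building, the kind of aggregate an alerting layer could consume.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("facility-telemetry").getOrCreate()

schema = StructType([
    StructField("building_id", StringType()),
    StructField("sensor", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "facility-telemetry")         # hypothetical topic
       .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema)
                      .alias("e"))
             .select("e.*"))

agg = (events
       .withWatermark("ts", "10 minutes")  # tolerate late-arriving readings
       .groupBy(F.window("ts", "5 minutes"), "building_id")
       .agg(F.avg("value").alias("avg_value")))

query = agg.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```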
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
A career in our Advisory Acceleration Centre is the natural extension of PwC's leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability for client engagements.

Years of Experience: Candidates with 4-8 years of hands-on experience

Position Requirements
Must Have:
- Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
- Strong expertise in end-to-end implementation of cloud data engineering solutions such as enterprise data lakes and data hubs in AWS
- Proficient in Lambda or Kappa architectures
- Awareness of data management concepts and data modelling
- Strong AWS hands-on expertise with a programming background, preferably Python/Scala
- Good knowledge of Big Data frameworks and related technologies; experience in Hadoop and Spark is mandatory (a batch-ETL sketch follows this posting)
- Strong experience with AWS compute services like EMR and Glue, and storage services like S3, Redshift, and DynamoDB
- Good experience with at least one of the AWS streaming services: Kinesis, SQS, or MSK
- Troubleshooting and performance-tuning experience in the Spark framework: Spark Core, Spark SQL, and Spark Streaming
- Strong understanding of the DBT ELT tool and usage of DBT macros
- Good knowledge of application DevOps tools (Git, CI/CD frameworks), with experience in Jenkins or GitLab and rich experience in source code management tools such as CodePipeline, CodeBuild, and CodeCommit
- Experience with AWS CloudWatch, AWS CloudTrail, AWS Config, and AWS Config Rules
- Good knowledge of AWS security and AWS key management
- Strong understanding of cloud data migration processes, methods, and project lifecycle
- Good analytical and problem-solving skills
- Good communication and presentation skills

Education: Any graduate.
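A minimal batch-ETL sketch of the S3-based data-lake pattern this posting centers on, runnable on EMR or Glue's Spark runtime; the bucket paths and column names are placeholders.

```python
# Curate raw JSON orders landed in S3 into a partitioned Parquet curated zone.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

raw = spark.read.json("s3://example-raw-zone/orders/")  # placeholder bucket

curated = (raw
           .filter(F.col("status") == "COMPLETED")
           .withColumn("order_date", F.to_date("created_at"))
           .dropDuplicates(["order_id"]))

# Partitioned Parquet keeps downstream Redshift Spectrum / Athena scans cheap.
(curated.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))
```

Partitioning by a date column is the usual lever for both query pruning and incremental reprocessing in this architecture.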
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
At Amazon, we strive to be the most innovative and customer-centric company on the planet. Come work with us to develop innovative products, tools, and research-driven solutions in a fast-paced environment by collaborating with smart and passionate leaders, program managers, and software developers. This role is based out of our Bangalore corporate office and is for a passionate, dynamic, analytical, innovative, hands-on, and customer-centric Business Analyst.

Key job responsibilities
This role primarily focuses on deep-dives, creating dashboards for the business, and working with different teams to develop and track metrics and bridges.
- Design, develop, and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs
- Conduct in-depth research into the drivers of the Localization business
- Analyze key metrics to uncover trends and root causes of issues
- Suggest and build new metrics and analyses that enable a better perspective on the business
- Capture the right metrics to influence stakeholders and measure success
- Develop domain expertise and apply it to operational problems to find solutions
- Work across teams with different stakeholders to prioritize and deliver data and reporting
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation

Basic Qualifications
- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience using advanced SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Preferred Qualifications
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - BLR 14 SEZ
Job ID: A2983100
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As a Data Engineer you will work on building and maintaining complex data pipelines and assembling large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will have the opportunity to collaborate with teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Key job responsibilities
- Design, implement, and support analytical data platform solutions for data-driven decisions and insights
- Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
- Work on data model designs, architecture, implementation, discussions, and optimizations
- Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc. (a Glue job skeleton follows this posting)
- Work with AWS technologies such as S3, Redshift, Lambda, and Glue, and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
- Work on the data lake platform and its components, such as Hadoop and Amazon S3
- Work with SQL-on-Hadoop technologies such as Spark, Hive, and Impala
- Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers
- Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Enjoy working closely with your peers in a group of talented engineers and gain knowledge
- Be enthusiastic about building deep domain knowledge across Amazon's business domains
- Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions

Basic Qualifications
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing, and building ETL pipelines

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2983400
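As one illustration of the ETL responsibilities above, here is the standard skeleton of an AWS Glue PySpark job; the database and table names are placeholders, and the boilerplate follows Glue's documented job structure.

```python
# AWS Glue job: read a catalogued source table, drop rows with null keys,
# and write curated Parquet to the data lake.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_ctx = GlueContext(sc)
job = Job(glue_ctx)
job.init(args["JOB_NAME"], args)

# Placeholders: a Glue Data Catalog database/table registered by a crawler.
dyf = glue_ctx.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Basic data-quality gate before the curated zone.
dyf = dyf.filter(lambda r: r["order_id"] is not None)

glue_ctx.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-lake/curated/orders/"},
    format="parquet",
)
job.commit()
```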
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
JOB DESCRIPTION
About KPMG in India
KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets, and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada.

KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Equal employment opportunity information
KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.

Data Engineer - AWS

Role & Responsibility
- Evaluate, develop, maintain, and test data engineering solutions for data lake and advanced analytics projects
- Implement processes and logic to extract, transform, and distribute data across one or more data stores from a wide variety of sources
- Distil business requirements and translate them into technical solutions for data systems, including data warehouses, cubes, marts, lakes, ETL integrations, BI tools, and other components
- Create and support data pipelines built on AWS technologies including Glue, Redshift, EMR, Kinesis, and Athena
- Participate in deep architectural discussions to build confidence and ensure customer success when building new solutions and migrating existing data applications on the AWS platform
- Optimize the data integration platform to provide optimal performance under increasing data volumes
- Support the data architecture and data governance function to continually expand their capabilities
- Experience in developing solution architecture for enterprise data lakes (applicable for AM/Manager-level candidates)
- Should have exposure to client-facing roles
- Strong communication, interpersonal, and team management skills

THE INDIVIDUAL
- Proficient in object-oriented/functional scripting languages: PySpark, Python, etc.
- Experience using AWS SDKs to create data pipelines: ingestion, processing, and orchestration
- Hands-on experience working with big data in an AWS environment, including cleaning, transforming, cataloguing, and mapping
- Good understanding of AWS components, storage (S3), and compute services (EC2)
- Hands-on experience with AWS managed services (Redshift, Lambda, Athena) and ETL (Glue)
- Experience migrating data from on-premise sources (e.g., Oracle, API-based, data extracts) into AWS storage (S3)
- Experience setting up a data warehouse using Amazon Redshift, creating Redshift clusters, and performing data analysis queries (a COPY-based load sketch follows this posting)
- Experience in ETL and data modelling on AWS ecosystem components: AWS Glue, Redshift, DynamoDB
- Experience setting up AWS Glue to prepare data for analysis through automated ETL processes
- Familiarity with AWS data migration tools such as AWS DMS, Amazon EMR, and AWS Data Pipeline
- Hands-on experience with the AWS CLI, Linux tools, and shell scripts
- AWS certifications will be an added plus

QUALIFICATIONS
B.Tech/M.Tech/MCA
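A hedged example of one task from the list above: loading extracts that have landed in S3 into Redshift with the COPY command. The cluster endpoint, schema/table names, and IAM role ARN are placeholders.

```python
# Issue a Redshift COPY from S3, then refresh planner statistics.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dwh", user="etl_user", password="***",
)
conn.autocommit = True

copy_sql = """
    COPY staging.orders
    FROM 's3://example-landing/orders/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""
with conn.cursor() as cur:
    cur.execute(copy_sql)
    cur.execute("ANALYZE staging.orders;")  # keep the query planner current
```

COPY with an attached IAM role avoids embedding long-lived credentials in the load job, which is the usual security posture on AWS.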
Posted 1 week ago
9.0 years
0 Lacs
Hyderābād
On-site
We are looking for an experienced and hands-on Manager - Snowflake Data Engineering to lead and expand our data engineering capabilities. This role is ideal for a technically strong leader who can design scalable data architectures, manage a high-performing team, and collaborate cross-functionally to deliver reliable and secure data solutions.

Responsibilities:
- Design and implement robust Snowflake data warehouse architectures and ETL pipelines to support business intelligence and advanced analytics use cases (a connector sketch follows this posting)
- Lead and mentor a team of data engineers, ensuring high-quality and timely delivery of projects
- Collaborate closely with data analysts, data scientists, and business stakeholders to understand data needs and design effective data models
- Develop, document, and enforce best practices for Snowflake architecture, data modeling, performance optimization, and ETL processes
- Own the optimization of Snowflake environments to ensure low-latency and high-availability data access
- Drive process improvements, evaluate emerging tools, and continuously enhance our data engineering infrastructure
- Ensure data pipelines are built with high levels of accuracy, completeness, and security, in compliance with data privacy regulations (GDPR, CCPA, etc.)
- Partner with cloud engineering and DevOps teams to integrate data solutions seamlessly within the AWS ecosystem
- Participate in capacity planning, budgeting, and resource allocation for the data engineering function

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 9+ years of overall experience in cloud-based data engineering, with at least 4 years of hands-on experience in Snowflake
- Proven track record in designing, deploying, and managing Snowflake data platforms at scale
- Expertise in AWS services such as S3, Redshift, Lambda, Glue, etc.
- Strong command of SQL, Python, and other data manipulation languages
- Experience managing and mentoring a team of engineers and leading cross-functional initiatives
- Strong understanding of data governance, data security, and compliance frameworks (e.g., GDPR, CCPA)
- Excellent problem-solving and communication skills

Additional Information
- Master's degree in Computer Science or a related discipline
- Snowflake certification(s)
- Experience with additional cloud data warehouses (e.g., AWS Redshift, Google BigQuery)
- Familiarity with Agile methodologies and modern development practices (CI/CD, Git, Jira, etc.)

Job Type: Full-time
Schedule: Day shift
Experience: Overall: 9 years (Required); Snowflake: 4 years (Required)
Work Location: In person
Speak with the employer: +91 9959381537
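A minimal sketch with the Snowflake Python connector to illustrate the load-then-model pattern the responsibilities describe; the account identifier, warehouse, stage, and table names are all placeholders, and the connector itself is an assumption about tooling.

```python
# Load staged Parquet files into a staging table, then publish a clean view.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",  # placeholder account identifier
    user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)

cur = conn.cursor()
cur.execute("""
    COPY INTO staging.orders
    FROM @landing/orders/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
cur.execute("""
    CREATE OR REPLACE VIEW marts.daily_orders AS
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM staging.orders
    GROUP BY order_date
""")
cur.close()
conn.close()
```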
Posted 1 week ago
70.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
hackajob is collaborating with Zipcar to connect them with exceptional tech professionals for this role.

Who are we? Glad you asked! Avis Budget Group is a leading provider of mobility options, with brands including Avis, Budget & Budget Truck, and Zipcar. With more than 70 years of experience and 11,000 locations in 180 countries, we are shaping the future of our industry and want you to join us in our mission. Zipcar is the world's leading car-sharing network, found in urban areas and university campuses in more than 500 cities and towns. Our team is smart, creative, and fun, and we're driven by a mission: to enable simple and responsible urban living. Apply today to get connected to an exciting career, a supportive family of employees, and a world of opportunities.

What is ABG's strategy in India? At our India Build Center, you will play a key role in driving the digital transformation narrative of ABG. Being at the core of ABG's growth strategy, we will develop technology-led offerings that would position Avis and its brands as the best vehicle rental company in the world. Our goal is to create the future of customer experience through technology. The India Build Center is based in Bengaluru, India. We are currently located at WeWork Kalyani Roshni Tech Hub in Marathahalli on Outer Ring Road, strategically located close to product companies and multiple tech parks like Embassy Tech Park, ETV, RMZ Ecospace, Kalyani Tech Park, EPIP Zone, and ITPL, among others.

The Fine Print: We encourage Zipsters to bring their whole selves to work - unique perspectives, personal experiences, backgrounds, and however they identify. We are proud to be an equal opportunity employer - M/F/D/V. This document does not constitute a promise or guarantee of employment. This document describes the general nature and level of this position only. Essential functions and responsibilities may change as business needs require. This position may be with any affiliate of Avis Budget Group.

Data Engineer/SDE 3 (Data Engineering)
Location: Bengaluru, India | 100% on-site

The Impact You'll Make
We are looking for a talented and passionate senior engineer to lead the way on the development and maintenance of Zipcar's core platform services. These are the underlying services that support our car sharing mobile and web ecommerce products - the primary driver of $9B in annual revenue. This role requires a resourceful individual, a persistent problem solver, and a strong hands-on engineer. This is a great opportunity to have a big impact as part of a growing team in the midst of technology and product transformation. Watch our talk at a recent AWS re:Invent conference here.

What You'll Do
- Build a deep understanding of existing systems
- Participate in or lead design reviews with peers and stakeholders
- Develop robust, testable code that meets design requirements
- Review code developed by other developers, providing feedback on style, functional correctness, testability, and efficiency
- Triage system-wide issues and identify the root cause of incidents
- Work independently and participate in/contribute to architecture discussions
- Identify and resolve existing critical technical debt
- Build transparent systems with proper monitoring, observability, and alerting
- Plan for robust build, test, and deployment automation
- Work with product stakeholders and front-end developers to understand the essence of requirements and provide pragmatic solutions
- Work within an Agile framework

What We're Looking For
- 3-5 years of professional experience designing/building/maintaining a highly available data and analytics platform
- 3+ years of experience in data engineering, with a focus on building large-scale data processing systems
- Hands-on experience with AWS or a similar cloud platform, building data engineering solutions for analytics and science (2+ years)
- Experience building complex data pipelines - batch and/or real-time event-based processing (2+ years)
- Strong experience in designing, building, and maintaining a data warehouse in Redshift or similar cloud-based solutions (2+ years)
- Experience in Matillion or a similar ETL/ELT tool for developing data ingestion and curation flows (2+ years)
- Strong hands-on experience in SQL (2+ years)
- Strong hands-on experience in modern scripting languages using Python (2+ years)
- Experience building complex ETL using Spark (Scala or Python) for event-based big data processing (1+ years)
- Strong hands-on experience with NoSQL DBs - MongoDB, Cassandra, or DynamoDB (1+ years)
- Strong experience with AWS deployment using CI/CD pipelines preferred (1+ years)
- Experience in infrastructure-as-code services like Terraform preferred (1+ years)
- Experience building mission-critical systems running 24x7
- Desire to work within a team of engineers at all levels of experience
- Desire to mentor junior developers, maximizing their productivity
- Good written and spoken communication skills
Posted 1 week ago
2.0 years
4 - 8 Lacs
Gurgaon
On-site
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

About the role:
This role will be part of a team that develops software that processes data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior.

The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding until post-launch support, including development and testing, and is expected to coordinate, support, and work with multiple delocalized project teams in multiple regions. As a Software Engineer with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day, across 3 different AWS regions. Your role will involve implementing and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Responsibilities:
- System Deployment: Build new features in the existing backend processing pipelines
- CI/CD Implementation: Leverage CI/CD pipelines for automated build, test, and deployment processes; ensure continuous integration and delivery of features, improvements, and bug fixes
- Code Quality and Best Practices: Adhere to coding standards, best practices, and design principles; participate in code reviews and provide constructive feedback to maintain high code quality
- Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores
- Team Collaboration: Follow best practices and collaborate with cross-functional teams to ensure a cohesive and unified approach to software development
- Security and Compliance: Implement security best practices for all tiers of the system; ensure compliance with industry standards and regulations related to AWS platform security

Key Skills:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- Proven experience (minimum 2 years) in Java development and scripting languages such as Python in an AWS cloud environment
- Good experience with SQL and a database system such as Postgres
- Good understanding of CI/CD principles and tools; GitLab a plus
- Good problem-solving and debugging skills
- Good communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions
- Utilizes team collaboration to contribute to innovative solutions efficiently

Other desirable skills:
- Knowledge of networking principles and security best practices
- AWS certifications
- Experience with data warehouses, ETL, and/or data lakes very helpful
- Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
Posted 1 week ago
40.0 years
0 Lacs
Greater Kolkata Area
Remote
Who We Are
Escalent is an award-winning data analytics and advisory firm that helps clients understand human and market behaviors to navigate disruption. As catalysts of progress for more than 40 years, our strategies guide the world's leading brands. We accelerate growth by creating a seamless flow between primary, secondary, syndicated, and internal business data, providing consulting and advisory services from insights through implementation. Based on a profound understanding of what drives human beings and markets, we identify actions that build brands, enhance customer experiences, inspire product innovation, and boost business productivity. We listen, learn, question, discover, innovate, and deliver - for each other and our clients - to make the world work better for people.

Why Escalent? Once you join our team you will have the opportunity to...
- Access experts across industries for maximum learning opportunities, including weekly knowledge-sharing sessions, LinkedIn Learning, and more
- Gain exposure to a rich variety of research techniques from knowledgeable professionals
- Enjoy a remote-first/hybrid work environment with a flexible schedule
- Obtain insights into the needs and challenges of your clients, to learn how the world's leading brands use research
- Experience peace of mind working for a company with a commitment to conducting research ethically
- Build lasting relationships with fun colleagues in a culture that values each person

Role Overview:
We are looking for a Data Engineer to design, build, and optimize scalable data pipelines and infrastructure that power analytics, machine learning, and business intelligence. You will work closely with data scientists, analysts, and software engineers to ensure efficient data ingestion, transformation, and management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to extract, transform, and load data from diverse sources (an orchestration sketch follows this posting)
- Build and optimize data storage solutions using SQL and NoSQL databases, data lakes, and cloud warehouses (Snowflake, BigQuery, Redshift)
- Ensure data quality, integrity, and security through automated validation, governance, and monitoring frameworks
- Collaborate with data scientists and analysts to provide clean, structured, and accessible data for reporting and AI/ML models
- Implement best practices for performance tuning, indexing, and query optimization to handle large-scale datasets
- Stay updated with emerging data engineering technologies, architectures, frameworks, and industry best practices
- Write clean, structured code as defined in the team's coding standards, and create documentation for best practices

Required Skills:
- Strong proficiency in Python, SQL, and data processing frameworks (Pandas, Spark, Hadoop)
- Experience with cloud-based data platforms (AWS, Azure, GCP) and services like S3, Glue, Athena, Data Factory, or BigQuery
- Solid understanding of database design, data modeling, and warehouse architectures
- Hands-on experience with ETL/ELT pipelines and workflow orchestration tools (Apache Airflow, Prefect, Luigi)
- Knowledge of APIs, RESTful services, and integrating multiple data sources
- Strong problem-solving and debugging skills in handling large-scale data processing challenges
- Experience with version control systems (Git, GitHub, GitLab)
- Ability to work in a team setting
- Organizational and time management skills

Desirable skills:
- Experience working with Agile development methodologies
- Experience in building self-service data platforms for business users and analysts
- Effective written and verbal communication skills
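A sketch of a small Airflow DAG matching the ETL/ELT orchestration this posting describes; the task bodies are stubs, the pipeline name is hypothetical, and connection handling is omitted for brevity (the `schedule` keyword assumes Airflow 2.4+).

```python
# Three-step extract -> transform -> load pipeline, run once a day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw data from source APIs / databases")

def transform(**_):
    print("clean and reshape with pandas or Spark")

def load(**_):
    print("write curated tables to the warehouse")

with DAG(
    dag_id="daily_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # explicit dependency chain
```

In a real deployment each stub would call into tested library code, with retries and alerting configured on the operators rather than inside the task functions.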
Posted 1 week ago
1.0 years
8 - 10 Lacs
Bengaluru
On-site
Qualifications:
- Bachelor's degree in mathematics, engineering, statistics, computer science, or a related field
- 1+ years of business analysis experience (dealing with large, complex data)
- Demonstrated ability with data warehousing, database administrator roles, and database migration
- Strong experience in dashboarding using Tableau/Power BI/Excel/PowerPivot
- Strong communication skills and a team player
- Demonstrated ability to manage and prioritize workload

Amazon is seeking a Business Analyst to join the Abuse Prevention vertical of the India Returns team. The team's vertical is focused on eliminating abuse and misuse associated with customer returns and rejects, thereby improving the India business P&L. This role would drive analytics support for product managers and business intelligence engineers for key strategic priorities. The position represents an exciting opportunity to be a part of a high-paced environment. The ideal candidate will be detail-oriented, well-versed with SQL (Python is a plus), and driven to provide insightful and timely data-based insights. The candidate should have strong analytical and communication skills. There is always room to make things better, so this candidate should also have the ability to invent and simplify. Lastly, the candidate should have an ability to work effectively with cross-functional teams.

Key job responsibilities
This person will own the production and delivery of a suite of analytics reports and dashboards used by the team to make key business decisions. This will involve:
1. Building the data structure, transformation processes, and load jobs in Redshift, with data processing and presentation in Excel
2. Debugging report issues and unblocking workflows
3. Communicating with the Product team and customers to provide status updates
4. Publishing detailed automated dashboards
5. Extracting and transforming data from tables and loading it into tables with an optimized data structure

Knowledge of scripting for automation (e.g., VBScript, Python, Perl, Ruby) is a plus.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 week ago
2.0 - 5.0 years
4 - 6 Lacs
Bengaluru
On-site
QuickSight Analyst - R01550885
Senior Lead BI Engineer

Primary Skills: Athena, SNS, SQS, CloudWatch, Kinesis, Redshift

Job requirements
Business Intelligence (BI) professional with hands-on experience in Amazon QuickSight and dashboard development. The ideal candidate will be responsible for transforming data into meaningful insights and intuitive visualizations to support data-driven decision-making across the organization. 2-5 years of experience in Business Intelligence, Data Analysis, or a similar role.

Key Responsibilities:
- Design, develop, and maintain interactive dashboards and reports using Amazon QuickSight
- Work with business stakeholders to gather reporting requirements and translate them into BI solutions
- Perform data analysis and validation to ensure accuracy and consistency in reporting
- Collaborate with data engineers and analysts to integrate data sources and optimize performance
- Create and manage datasets, calculated fields, filters, and visual elements in QuickSight
- Support ad-hoc reporting requests and data exploration as needed
- Document reporting solutions and ensure best practices in BI development and data governance
Posted 1 week ago
0 years
5 - 9 Lacs
Bengaluru
On-site
SQL Support (L2) - R01550303
Lead Data Engineer

Primary Skills: Athena, SNS, SQS, CloudWatch, Kinesis, Redshift

Job requirements
• Monitor/track multiple sources for issues being reported - ServiceNow/Slack/Email/Calls
• Create and track all incidents/production issues in ServiceNow with the right categorization
• For all ServiceNow incidents, analyze the inputs from the helpdesk and assign them to the right application queue
• Manage the queue assignments of incidents based on status and change of context
• Perform actions based on SOPs to analyze/resolve the issue
• Identify the root cause or source of the issue where possible, to make L3 analysis easier
• Create a summary of actions taken and possible next steps, and hand over to the L3 team by changing the assignment
• Communicate with the end user on updates, and gather additional data and feedback
• Validate the fix provided by L3, update incidents as required, and involve L4/L5 as required
• Report trends and patterns in issues to feed product backlogs, and create SOPs on identified issues and resolutions for future use
• Provide on-call support for any critical issues during the weekend
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
India
Remote
Python JD:
Role Summary: We are seeking a skilled Python Developer with strong experience in data engineering, distributed computing, and cloud-native API development. The ideal candidate will have hands-on expertise in Apache Spark, Pandas, and workflow orchestration using Airflow or similar tools, along with deep familiarity with AWS cloud services. You'll work with cross-functional teams to build, deploy, and manage high-performance data pipelines, APIs, and ML integrations.

Key Responsibilities:
- Develop scalable and reliable data pipelines using PySpark and Pandas
- Orchestrate data workflows using Apache Airflow or similar tools (e.g., Prefect, Dagster, AWS Step Functions)
- Design, build, and maintain RESTful and GraphQL APIs that support backend systems and integrations (a minimal REST sketch follows this posting)
- Collaborate with data scientists to deploy machine learning models into production
- Build cloud-native solutions on AWS, leveraging services like S3, Glue, Lambda, EMR, RDS, and ECS
- Support microservices architecture with containerized deployments using Docker and Kubernetes
- Implement CI/CD pipelines and maintain version-controlled, production-ready code

Required Qualifications:
- 3-5 years of experience in Python programming with a focus on data processing
- Expertise in Apache Spark (PySpark) and Pandas for large-scale data transformations
- Experience with workflow orchestration using Airflow or similar platforms
- Solid background in API development (RESTful and GraphQL) and microservices integration
- Proven hands-on experience with AWS cloud services and cloud-native architectures
- Familiarity with containerization (Docker) and CI/CD tools (GitHub Actions, CodeBuild, etc.)
- Excellent communication and cross-functional collaboration skills

Preferred Skills:
- Exposure to infrastructure-as-code (IaC) tools like Terraform or CloudFormation
- Experience with data lake/warehouse technologies such as Redshift, Athena, or Snowflake
- Knowledge of data security best practices, IAM role management, and encryption
- Familiarity with monitoring/logging tools like Datadog, CloudWatch, or Prometheus

PySpark, Pandas, and data transformation or workflow experience is a must (at least 2 years).

Pay: Attractive salary. Interested candidates can call or WhatsApp their resume to 9092626364.

Job Type: Full-time
Benefits: Cell phone reimbursement; work from home
Schedule: Day shift; weekend availability
Work Location: In person
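A minimal REST endpoint sketch for the API responsibility above. FastAPI is an assumption here (the posting only says "RESTful APIs"), and the dataset, route, and field names are invented for illustration.

```python
# Serve a pandas-computed aggregate over HTTP as JSON.
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

# Stand-in for a curated dataset produced by the Spark/Pandas pipelines.
SALES = pd.DataFrame({
    "region": ["north", "south", "north", "east"],
    "amount": [120.0, 75.5, 200.0, 50.25],
})

@app.get("/sales/summary")
def sales_summary() -> dict:
    """Return total sales per region."""
    totals = SALES.groupby("region")["amount"].sum()
    return totals.round(2).to_dict()

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```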
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About AkzoNobel
Since 1792, we've been supplying the innovative paints and coatings that help to color people's lives and protect what matters most. Our world class portfolio of brands – including Dulux, International, Sikkens and Interpon – is trusted by customers around the globe. We're active in more than 150 countries and use our expertise to sustain and enhance the fabric of everyday life. Because we believe every surface is an opportunity. It's what you'd expect from a pioneering and long-established paints company that's dedicated to providing sustainable solutions and preserving the best of what we have today – while creating an even better tomorrow. Let's paint the future together. For more information please visit www.akzonobel.com © 2024 Akzo Nobel N.V. All rights reserved.
Job Purpose
POSITION OVERVIEW: The Sr. Visualization Engineer is responsible for developing, maintaining, and leading the data modelling and visualization for Global SC Analytics solutions to improve performance and achieve the Supply Chain of the future vision. The role is based in the GBS – Pune organization and reports to the Supply Chain Advanced Analytics Portfolio Lead, which is part of the AkzoNobel global IT delivery organization and aims to use data-driven techniques and methodologies to uncover insights, support decision-making, and drive business strategies. You will partner with Supply Chain COE teams and business teams to understand supply chain needs, and collaborate with the IT organization to develop analytical reports. You will also be responsible for delivering the required solutions used for descriptive, diagnostic, predictive and prescriptive analytics across the entire Supply Chain.
Key Activities
Requirement Gathering: Work with Supply Chain BPMs and Product Owners (IBP, Demand Planning, Supply Planning, Customer Ops, Logistics, S&OE, Inventory, Network Strategy) to gather, understand and document technical requirements. Discuss and align with enterprise solution architects on the design and implementation of data solutions using the Azure Data Platform and Microsoft Power BI.
Solution Design & Development: Create visually compelling and interactive data visualizations using modern tools and frameworks (e.g., Tableau, Power BI, Databricks) to support data-driven decisions across the organization.
Semantic Model Development: Define semantic models that optimize visualization performance. Data engineering experience (preferably within Databricks) is a plus.
Data Storytelling: Transform complex data into clear, accessible insights through data storytelling. Ensure that visualizations effectively communicate key messages and trends.
Optimization: Ensure efficient data visualization performance, especially when dealing with large datasets. Optimize dashboards for speed, reliability, and usability. Experience with DAX Studio is a plus.
Mentorship: Lead and mentor team members in visualization best practices; provide technical leadership and guidance.
Tool and Technology Innovation: Stay up-to-date with the latest trends in data visualization tools, frameworks, and technologies. Advocate for new tools and methods to improve visualization quality and efficiency.
Dashboard and Report Creation: Design and maintain intuitive, interactive dashboards that provide actionable insights to business stakeholders.
Continuous Improvement: Proactively identify opportunities for improvement and optimization in visualization processes, UI/UX, and data interpretation.
Experience
Minimum 5-7 years of experience in data visualization, with a proven track record of designing and implementing high-quality, interactive dashboards and reports.
Technical Skills:
Expertise in visualization tools such as Power BI, Tableau, D3.js, or similar data visualization platforms.
Strong knowledge of data manipulation and transformation using SQL, Python, R, or similar languages.
Advanced knowledge of data visualization principles and best practices.
Experience with DAX Studio.
Experience with front-end web technologies (JavaScript, HTML, CSS) for custom visualizations is a plus.
Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
Experience with version control tools (Git).
Business Acumen: Ability to understand business requirements and translate them into effective visual solutions.
Education
Bachelor's degree in Computer Science, Engineering, Data Science, Information Systems, or a related field. A Master's degree is a plus.
Preferred Qualifications:
Certifications: Tableau Certified Professional, Power BI Certification, or similar.
Experience with Data Warehouses: Familiarity with integrating visualizations with large-scale data storage solutions (e.g., Redshift, Snowflake).
Experience in Agile Environments: Comfortable working in Agile teams and managing priorities in fast-paced, iterative development cycles.
At AkzoNobel we are highly committed to ensuring an inclusive and respectful workplace where all employees can be their best self. We strive to embrace diversity in a context of tolerance. Our talent acquisition process plays an integral part in this journey, as it sets the foundations for a diverse environment. For this reason we train and educate on the implications of unconscious bias so that our TA and hiring managers are mindful of them and take corrective actions when applicable. In our organization, all qualified applicants receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age or disability.
Requisition ID: 45700
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Noida
Remote
Eightfold was founded with a vision to solve for employment in our society. For decades, the connection between individuals and opportunities has been based on who they are and the strength of their network vs. their potential. Eightfold leverages artificial intelligence to transform how to think about skills and capabilities for individuals and how jobs and career decisions are made. Eightfold offers the industry's first AI-powered Talent Intelligence Platform to transform how organizations plan, hire, develop and retain a diverse workforce, enabling individuals to transform their careers. To date, Eightfold AI has received more than $410 million in funding and a valuation of over $2B from leading investors to further our mission of finding the right career for everyone in the world. If you are passionate about solving one of the most fundamental challenges of our society - employment, working on hard business problems, and being part of an amazing growth story - Eightfold is the place to be!
Our customer stories: https://eightfold.ai/customers/customer-stories/
Press: https://eightfold.ai/about/press
About the role
We are looking for a Data Engineer II, Analytics to join our growing team and help build scalable, high-performance data pipelines that enable meaningful business insights. You'll work with modern data tools, collaborate with cross-functional teams, and contribute to building a robust data foundation that supports Eightfold's AI-driven analytics and reporting needs.
What You Will Learn To Do
Design & Develop Pipelines: Build, maintain, and optimize reliable ETL/ELT pipelines to ingest, process, and transform large datasets from diverse sources using Databricks and Amazon Redshift (see the sketch below).
Data Modeling & Architecture: Support the design and implementation of scalable data models and architectures to meet evolving analytics and business intelligence needs.
Data Quality & Integration: Ensure accuracy, consistency, and quality of integrated data from structured and unstructured sources across systems.
Performance Tuning: Optimize the performance of queries, pipelines, and databases to ensure high efficiency and reliability for analytics workloads.
Collaboration & Delivery: Partner with analytics engineers, data scientists, product managers, and business stakeholders to deliver high-quality data solutions that enable business insights and product innovations.
Documentation & Best Practices: Contribute to documentation, promote data engineering best practices, and ensure data governance, security, and compliance standards are upheld.
What We Need
Experience: 3–6 years of hands-on experience as a Data Engineer or in a similar data engineering role, ideally in analytics-focused environments.
Databricks: Practical experience building and managing pipelines and workflows using Databricks.
Amazon Redshift: Strong understanding of Amazon Redshift, including data modeling and query optimization.
Programming: Proficiency in SQL and working knowledge of Python or Scala for data processing tasks.
ETL/ELT Tools: Hands-on experience developing and maintaining ETL/ELT processes.
Big Data Tools (Good to Have): Familiarity with Apache Spark, Hadoop, Kafka, or other big data technologies is a plus.
Analytical & Problem-Solving Skills: Ability to troubleshoot data issues and optimize performance effectively.
Communication & Collaboration: Strong communication skills with a collaborative mindset in fast-paced environments.
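As a rough illustration of the Databricks-to-Redshift pattern this posting describes, the sketch below writes a curated Spark DataFrame to Redshift over plain JDBC. The URL, table, and credentials are placeholders; production pipelines typically prefer the dedicated Redshift connector, which bulk-loads via an S3 tempdir and COPY.

```python
# Sketch: load a curated Spark DataFrame into Redshift over JDBC.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("curated-to-redshift").getOrCreate()

curated = spark.read.parquet("s3://example-bucket/curated/events/")  # placeholder path

(
    curated.write
    .format("jdbc")
    .option("url", "jdbc:redshift://my-cluster.example:5439/analytics")  # placeholder
    .option("dbtable", "analytics.fct_events")
    .option("user", "loader")
    .option("password", "***")  # use a secrets manager in practice
    .option("driver", "com.amazon.redshift.jdbc42.Driver")  # must be on the classpath
    .mode("append")
    .save()
)
```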
Hybrid Work @ Eightfold: We embrace a hybrid work model that aims to boost collaboration, enhance our culture, and drive innovation through a blend of remote and in-person work. We are committed to creating a dynamic and flexible work environment that nurtures the collaborative spirit of our team. Starting February 1, 2024, our employees will return to the office twice a week. We have offices in Bangalore and Noida in India. Eightfold.ai provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, or disability. Experience our comprehensive benefits with family medical, vision and dental coverage, a competitive base salary, and eligibility for equity awards and discretionary bonuses or commissions.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Are you analytically sharp and passionate about applying advanced analytics to impact business decisions? Come and be a driving force of Amazon's International Emerging Stores (IES) Shopping experience team. The Amazon IES Shopping team owns the charter for defining the shopping experience across multiple category needs for Amazon's emerging markets. We are a large product organization solving key customer problems through customer insights, tech development and machine learning capabilities. We work backwards from emerging customer needs and build solutions to scale these globally. We build adaptive experiences that adapt to the customer, category and country whom we serve.
Key job responsibilities
The candidate will:
Build scalable solutions and self-serve platforms that will provide data/KPIs to inform business decision making
Investigate data sources across Amazon and expand existing device data infrastructure
Identify, develop, manage, and execute analyses to uncover areas of opportunity and present written business recommendations that will help grow the business
Develop a thorough understanding of customer behavior and external business drivers to inform decision making
Analyze key insight trends and build models that predict customer behavior, using statistical rigor to simplify and provide thought leadership to device product and marketing groups
Collaborate with finance, marketing, and product management as a leader of ongoing analytical support
BASIC QUALIFICATIONS
Bachelor's degree or higher in a quantitative/technical field (e.g. Computer Science, Statistics, Engineering)
2+ years of hands-on experience writing SQL queries
Experience with building and maintaining basic data artifacts (e.g. ETL, data models, queries)
Experience with AWS services including S3, Redshift, EMR, Kinesis and RDS
Experience in working and delivering end-to-end projects independently
Knowledge of distributed systems as it pertains to data storage and computing
Experience with one or more data visualization tools (e.g. Tableau, Quicksight, PowerBI) and statistical methods (e.g. t-test, Chi-squared)
2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc.
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling (see the sketch below)
Knowledge of SQL and data warehousing concepts
Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics or an equivalent quantitative field
PREFERRED QUALIFICATIONS
Inquisitive mindset, with proven problem solving ability, and a passion for big data
Experience with building multi-dimensional data models to serve as a foundation for future analyses
Experience building/operating highly available, distributed systems for data extraction, ingestion, data modelling and processing of large data sets
Demonstrated proficiency with various approaches in regression, classification, and cluster analysis
Knowledge and experience in data visualization/reporting software (e.g., Tableau)
Advanced SQL/data mining skills and analytical tools (like R/Python/SAS)
About The Team
The Amazon IES Shopping experience team owns the charter for defining the shopping experience across key country and category needs for Amazon's emerging markets. We are a large product organization solving key customer problems through customer insights, tech development and machine learning capabilities.
We work backwards from emerging customer needs and build solutions to scale these globally.
Basic Qualifications
1+ years of tax, finance or a related analytical field experience
2+ years of experience writing complex Excel VBA macros
Bachelor's degree or equivalent
Experience defining requirements and using data and metrics to draw business insights
Experience with SQL or ETL
Preferred Qualifications
Experience working with Tableau
Experience using very large datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ASSPL - Karnataka
Job ID: A2983177
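The "SQL to pull data, Python to process" qualification above is a common analyst pattern; a hedged sketch follows. The table, query, and connection details are hypothetical.

```python
# Sketch: pull an aggregate from Redshift with SQL, then process in Pandas.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="analyst", password="***",
)

SQL = """
    SELECT order_date, marketplace, SUM(gmv) AS gmv
    FROM shopping.daily_orders      -- hypothetical table
    GROUP BY 1, 2
    ORDER BY 1;
"""
df = pd.read_sql(SQL, conn)
conn.close()

# Simple trend signal: week-over-week GMV change per marketplace,
# assuming one row per day per marketplace.
df["gmv_wow"] = df.groupby("marketplace")["gmv"].pct_change(periods=7)
print(df.tail())
```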
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
As an Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities
Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
Govern data design/modeling – documentation of metadata (business definitions of entities and attributes) and construction of database objects – for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
Assist with data planning, sourcing, collection, profiling, and transformation.
Create source-to-target mappings for ETL and BI developers (see the sketch below).
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; data streaming (consumption/production); data in transit.
Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.
Qualifications
5+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture.
Around 2+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
2+ years of experience developing enterprise data models.
Experience in building solutions in the retail or supply chain space.
Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
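One lightweight way to keep source-to-target mappings reviewable is to express them as data and generate the ETL SQL from that. The sketch below is illustrative; every table and column name is hypothetical.

```python
# Source-to-target mapping (STM) as data, rendered into an INSERT..SELECT.
STM = [
    # (source expression, target column, note for the ETL developer)
    ("src.ord_id",                 "order_key",   "natural key"),
    ("CAST(src.ord_dt AS DATE)",   "order_date",  "truncate timestamp"),
    ("COALESCE(src.region, 'NA')", "region_code", "default missing regions"),
    ("src.amt_usd",                "revenue_usd", "converted upstream"),
]

def render_insert(source_table: str, target_table: str) -> str:
    cols = ",\n    ".join(f"{expr} AS {target}" for expr, target, _ in STM)
    return f"INSERT INTO {target_table}\nSELECT\n    {cols}\nFROM {source_table} src;"

print(render_insert("staging.orders_raw", "dw.fct_orders"))
```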
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
Key Responsibilities
Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts.
Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
Create Datadog monitors for job status (success/failure), job duration, resource utilization, and error trends (see the sketch below).
Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
Integrate Datadog with related tools.
Conduct root cause analysis of ETL failures and performance bottlenecks.
Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
Document incident handling procedures and contribute to improving overall ETL monitoring maturity.
Participate in on-call rotations or scheduled support windows to manage ETL health.
Required Skills & Qualifications
3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
Familiarity with SQL and querying large datasets.
Experience working with Python, shell scripting, or Bash for automation and log parsing.
Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.
Preferred Qualifications
Experience with distributed tracing and APM in Datadog.
Prior experience monitoring Spark, Kafka, or streaming pipelines.
Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
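For the monitor-creation duties above, here is a hedged sketch using Datadog's v1 monitors API. The metric name, tags, threshold, and notification handle are assumptions; API keys are read from the environment.

```python
# Sketch: create a Datadog metric monitor for ETL job duration.
import os
import requests

payload = {
    "name": "ETL job duration too high - orders pipeline",
    "type": "metric alert",
    # etl.job.duration and the pipeline tag are hypothetical custom metrics.
    "query": "avg(last_30m):avg:etl.job.duration{pipeline:orders} > 3600",
    "message": "Orders pipeline is running long. @slack-data-oncall",
    "tags": ["team:data-eng", "pipeline:orders"],
    "options": {
        "thresholds": {"critical": 3600},
        "notify_no_data": True,
        "no_data_timeframe": 60,
    },
}

resp = requests.post(
    "https://api.datadoghq.com/api/v1/monitor",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created monitor:", resp.json()["id"])
```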
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Strong working knowledge of modern programming languages, ETL/data integration tools (preferably SnapLogic) and understanding of cloud concepts: SSL/TLS, SQL, REST, JDBC, JavaScript, JSON.
Strong hands-on experience in SnapLogic design/development.
Good working experience using various snaps for JDBC, SAP, Files, REST, SOAP, etc.
Good to have the ability to build complex mappings with JSON path expressions, Python scripting, flat files and cloud sources.
Good to have experience in Groundplex and Cloudplex integrations.
Should be able to deliver the project by leading a team of 6–8 members.
Should have experience in integration projects with heterogeneous landscapes.
Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL and Redshift).
Real-time experience working in OLAP and OLTP database models (dimensional models).
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Data Engineer
Position Summary
The Data Engineer is responsible for building and maintaining data pipelines, ensuring the smooth operation of data systems, and optimizing workflows to meet business requirements. This role will support data integration and processing for various applications.
Minimum Qualifications
6 years overall IT experience, with a minimum of 4 years of work experience in the tech skills below.
Tech Skills
Proficient in Python scripting and PySpark for data processing tasks.
Strong SQL capabilities, with hands-on experience managing big data using ETL tools like Informatica.
Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS and EventBridge (see the DAG sketch below).
Skilled in Bash shell scripting.
Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus.
Preferred: experience with Kafka and MuleSoft APIs.
Understanding of healthcare data systems is a plus.
Experience in Agile methodologies.
Strong analytical and problem-solving skills.
Effective communication and teamwork abilities.
Responsibilities
Develop and maintain data pipelines and ETL processes to manage large-scale datasets.
Collaborate to design and test data architectures that align with business needs.
Implement and optimize data models for efficient querying and reporting.
Assist in the development and maintenance of data quality checks and monitoring processes.
Support the creation of data solutions that enable analytical capabilities.
Contribute to aligning data architecture with overall organizational solutions.
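Since the posting calls out Airflow alongside S3 and Redshift, a minimal DAG sketch follows. The DAG id, schedule, and task body are illustrative assumptions.

```python
# Minimal Airflow DAG sketch: one task per daily S3 partition load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_partition(ds: str, **_) -> None:
    # Placeholder: a real task would issue a Redshift COPY for the
    # s3://example-bucket/raw/<ds>/ partition (path is hypothetical).
    print(f"Loading partition for {ds}")

with DAG(
    dag_id="daily_s3_to_redshift",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # 'schedule' in newer Airflow releases
    catchup=False,
) as dag:
    PythonOperator(task_id="load_partition", python_callable=load_partition)
```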
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.
The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
Junior Developer
Data Engineer
Senior Data Engineer
Tech Lead
Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
SQL
ETL tools
Data modeling
Cloud computing (AWS)
Python/R programming
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!