3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: DBT Developer

Position Summary
We are seeking a skilled DBT (Data Build Tool) Developer to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining data transformation workflows using DBT, ensuring data quality and consistency across our analytics platforms.

Key Responsibilities
- Develop, build, and test data transformation pipelines using DBT.
- Implement and manage end-to-end data pipelines, ensuring data quality, reliability, and scalability.
- Collaborate with data engineers, analysts, and business stakeholders to understand data requirements and deliver scalable solutions.
- Implement and enforce best practices for data modeling, version control, and documentation within DBT projects.
- Optimize SQL queries and DBT models for performance and reliability.
- Monitor, troubleshoot, and resolve issues in data pipelines and transformations.
- Ensure data quality through testing, validation, and documentation.
- Participate in code reviews and contribute to continuous improvement of data engineering processes.

Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 3+ years of experience in data engineering or analytics roles.
- Hands-on experience with DBT (Data Build Tool) in a production environment.
- Hands-on experience with advanced DBT concepts such as macros, materializations, snapshots, and configurations.
- Experience with relational databases and cloud warehouses (e.g., Snowflake, BigQuery, Redshift, PostgreSQL).
- Proficiency in SQL for data manipulation, querying, and transformation.
- Familiarity with data warehousing concepts and ETL/ELT processes.
- Familiarity with data modeling concepts (star/snowflake schema, normalization).
- Experience with version control systems (e.g., Git).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.

Preferred Qualifications
- Experience with cloud data platforms (e.g., AWS, GCP, Azure).
- Knowledge of data orchestration tools (e.g., Airflow, Prefect, Stonebranch).
- Familiarity with CI/CD pipelines for data projects.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people.
Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 305913
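For readers unfamiliar with the DBT workflows this role centers on, here is a minimal, hedged sketch of invoking dbt programmatically from Python (the programmatic API assumes dbt-core 1.5 or later); the tag selector is an invented example, not something from the posting.

```python
# Hypothetical sketch: run a tagged subset of dbt models, then their tests.
# Assumes dbt-core >= 1.5 and a working dbt project in the current directory.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

for args in (["run", "--select", "tag:nightly"],
             ["test", "--select", "tag:nightly"]):
    res: dbtRunnerResult = dbt.invoke(args)
    if not res.success:
        raise RuntimeError(f"dbt step failed: {args}")
```

In practice the same selectors are more commonly driven from an orchestrator's CLI task, but the programmatic entry point is convenient for embedding dbt in Python pipelines.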
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
Kivor is revolutionizing enterprise application support with Gen AI-powered autonomous agents. Our platform transforms data from logs, tickets, source code, and documentation into an application-specific knowledge graph, enabling faster issue detection and seamless autonomous resolutions. This innovation reduces support costs by over 50% and accelerates resolution times by up to 90%, ensuring business resiliency and empowering engineers to focus on growth-driven initiatives. Join us in redefining the future of enterprise application support and driving business value through scalable, future-ready solutions.

Role Description
This is a full-time hybrid role for a Senior Data Engineer located in Pune. The Senior Data Engineer will be responsible for designing, developing, and optimizing our data architecture. Day-to-day tasks include data modeling, implementing ETL processes, managing data warehousing solutions, and performing data analytics. The role involves collaborating with cross-functional teams to ensure data integrity and accessibility, enabling data-driven decision-making across the organization.

Qualifications
- Proficiency in Databricks, data engineering, and data modeling
- Experience with Extract, Transform, Load (ETL) processes and data warehousing
- Strong data analytics skills
- Excellent problem-solving and analytical skills
- Ability to work independently and in a hybrid environment
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Experience with Gen AI-based solutions is a plus
- Strong written and verbal communication skills
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us: Latinem India – Global Capability Centre
Latinem is a high-performance Global Capability Centre (GCC) delivering world-class strategic, digital, and operational support to Sobha, the leading developer in the Middle East, across its markets in the Middle East, the United States, and Australia. Driven by excellence and innovation, Latinem is more than just a support hub; it is the intellectual engine fueling some of the most ambitious real estate projects worldwide. We bring together top-tier talent across functions, including Engineering, Design, Technology, Finance, Marketing, HR, Procurement, and Business Intelligence, operating at the intersection of precision and scale.

🔹 Global Vision, Made in India
With its base in India, Latinem bridges time zones, cultures, and business priorities, enabling 24/7 business continuity, strategic execution, and seamless integration with international teams.

📌 Empowering Growth. Inspiring Excellence. Globally Aligned. Locally Brilliant. Always World-Class.

Job Summary:
We are seeking an experienced Data Engineer to join our team. The Data Engineer will be responsible for designing, developing, and maintaining our company's data architecture and data pipelines. The ideal candidate will have a strong background in data engineering, database management, and programming, with a passion for building scalable and efficient data solutions.

Key Responsibilities:
- Proven experience as a Data Engineer with a focus on the Azure tech stack.
- Design, develop, and maintain scalable data pipelines using Azure Data Factory, Databricks, and other Azure services.
- Implement and maintain ETL processes for ingesting, transforming, and loading data from various sources into Azure.
- In-depth knowledge of Azure services such as Azure Data Factory, Azure Databricks, and Azure SQL Database.
- Proficiency in T-SQL, Python, or other relevant programming languages.
- Experience with data modelling, data warehousing, and data integration.
- Familiarity with data security and compliance in Azure.
- Excellent analytical and critical thinking skills.
- Effective communication and collaboration skills.

Education, Experience and Skills Required:
- Bachelor's degree in Computer Science or Information Systems.
- Microsoft Certified: Azure Data Engineer Associate.
- Experience with big data technologies such as Azure Data Lake Storage and Azure Synapse Analytics.
- Familiarity with DevOps practices in a data engineering context.
- Knowledge of SQL and database management.
- Knowledge of the Real Estate and Construction domain is an added advantage.
Posted 6 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad, India
Type of Position: Full Time

Introduction:
Pragmatic Play is one of the world's leading suppliers of online slots, casino, live dealer, and bingo games, with new and exciting products and verticals added on a continuous basis. Pragmatic Play currently employs over 2,000 people in over 12 locations and has seen consistent, triple-digit growth year on year. Our award-winning slots portfolio contains unique in-house content consisting of over 200 proven HTML5 games, available in many currencies, 31 languages, and all major certified markets. Millions of players play our games every day across hundreds of operators such as Flutter, Bet365, Entain, Kindred, Gamesys, LeoVegas, Betsson and many others. We are a team of passionate individuals with the mission to succeed and create industry-leading games that players love.

The Role:
This role will build ETL data pipelines for data warehousing. The candidate needs hands-on experience with SQL and any standard ETL tool such as Talend, Informatica, or Pentaho, along with good data modelling skills to transform data as per business needs. Awareness of scripting languages such as Python, Shell, or Java is expected, and exposure to big data technologies like Spark/Scala/PySpark is an added advantage. The successful candidate will have experience in building and managing complex data marts and will develop database solutions to ensure company information is stored effectively and securely. The candidate should also be a self-starter with strong attention to detail, vocally self-critical, and able to work in a fast-paced environment.

Key Responsibilities:
- Develop database solutions to store and retrieve company information
- Design conceptual and logical data models and flowcharts
- Improve system performance by conducting tests, troubleshooting, and integrating new elements
- Optimize new and current database systems
- Use scripting languages to automate key processes governing data movement, cleansing, and processing activities

Key skills:
- Excellent analytical and quantitative skills: proven ability to perform analysis and to use hard data and metrics to back up assumptions and develop project business cases.
- Attention to detail: while understanding the big picture, you are an organized and detail-oriented person. Nothing gets missed on your watch and you can lead projects to completion.
- Working to the highest standards, even under pressure: you're ambitious, hold yourself to high standards, and thrive in a dynamic, high-energy environment.
- Passion for i-gaming: you love games and want to be part of a winning team.

Requirements:
- 2+ years of strong experience with data transformation and ETL on large data sets
- 2+ years of data modeling experience (relational, dimensional, columnar, big data)
- 2+ years of complex SQL or NoSQL experience
- Experience in advanced data warehouse concepts
- Expertise in SQL and advanced SQL
- Experience with industry ETL tools (e.g., Informatica, Talend)
- Experience with reporting technologies (e.g., Tableau, Power BI)
- Strong verbal and written communication skills
- Self-managed, proactive, and customer-focused
- Experience in programming languages (Python, Java, or Bash scripting)
- Good to have: experience with big data technologies (e.g., Hadoop, Spark, Redshift, Vertica, Hive)
- Experience as an enterprise technical or engineering consultant
- Degree in Computer Science, Information Systems, Data Science, or a related field
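As a flavor of the PySpark-based transformations this role lists as an advantage, here is a minimal sketch of a datamart-style aggregation; the table and column names are invented for illustration, not taken from the posting.

```python
# Hedged sketch: aggregate a staging table of bets into a daily datamart.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_wagers_mart").getOrCreate()

bets = spark.read.table("staging.bets")  # assumed staging table

daily = (bets
         .withColumn("bet_date", F.to_date("placed_at"))
         .groupBy("bet_date", "operator_id", "game_id")
         .agg(F.count("*").alias("bet_count"),
              F.sum("stake").alias("total_stake")))

# Partitioning by date keeps incremental reloads and pruning cheap.
daily.write.mode("overwrite").partitionBy("bet_date").saveAsTable("mart.daily_wagers")
```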
Posted 6 days ago
3.0 years
3 - 6 Lacs
Hyderābād
On-site
Job Description: ETL Test Automation – Senior Test Engineer

We are looking for a highly skilled and experienced Senior Test Engineer in ETL test automation.

Technical Expertise:
- 3 to 5 years of ETL/DW test automation experience.
- Strong knowledge of ETL processes, data warehouse concepts, and database testing.
- Experience in big data testing, covering both automated and manual testing for data validation.
- Proficient in writing complex SQL queries (preferably BigQuery) with a solid understanding of database concepts.
- Understanding of GCP tools: BigQuery, Dataflow, Dataplex, Cloud Storage.
- Ability to transform simple and complex business logic into SQL queries.
- Hands-on experience in Python for test automation.
- Familiarity with test automation frameworks.
- Excellent communication and client-facing skills.
- Experience with version control systems such as GitLab and test management tools such as JIRA and Confluence.
- Demonstrated experience working in an Agile/Scrum environment.
- GCP certifications or training in cloud data engineering.
- Familiarity with data governance, metadata management, and data forms.
- Exposure to real-time/streaming data systems, including monitoring, validation, and scaling strategies.

Key Responsibilities:
- Design, execute, and maintain QA strategies for ETL/data warehouse workflows on Google Cloud Platform (GCP).
- Validate large-scale data migrations to ensure accuracy and completeness between source and target systems.
- Develop and maintain automation scripts using Python or any relevant automation tool.
- Identify, investigate, and resolve data anomalies and quality issues.
- Write and optimize complex SQL queries (preferably for BigQuery) to validate transformations, mappings, and business rules.
- Work closely with data engineers, architects, and analysts to understand data requirements and support data quality initiatives.
- Collaborate in an Agile/Scrum development environment.
- Perform manual and automated data validations for high-volume pipelines.
- Track and manage defects using JIRA and maintain transparency via Confluence.
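To make the Python-plus-BigQuery validation work concrete, here is a hedged sketch of a source-to-target row-count reconciliation using the google-cloud-bigquery client; the project, dataset, and table names are placeholders, and the client is assumed to pick up Application Default Credentials.

```python
# Hypothetical sketch: reconcile row counts between a raw and a warehouse table.
# Requires the google-cloud-bigquery package and configured ADC credentials.
from google.cloud import bigquery

client = bigquery.Client()

def row_count(table: str) -> int:
    sql = f"SELECT COUNT(*) AS n FROM `{table}`"
    return next(iter(client.query(sql).result())).n

src = row_count("my_project.raw.orders")   # placeholder source table
tgt = row_count("my_project.dw.orders")    # placeholder target table
assert src == tgt, f"Row-count mismatch: source={src}, target={tgt}"
```

Real suites extend the same pattern to checksum and column-level comparisons, but count reconciliation is the usual first gate after a migration.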
Posted 6 days ago
7.0 years
0 Lacs
Haryana, India
On-site
ETL QA - Technical Lead
Experience: 7 to 11 years
Job Locations: Hyderabad (1 position) | Gurgaon (1 position)

Job Summary:
We are looking for a highly skilled and detail-oriented ETL QA Technical Lead with strong experience in big data testing, the Hadoop ecosystem, and SQL validation. The ideal candidate should have hands-on experience in test planning, execution, and automation in a data warehouse/ETL environment. You'll work closely with cross-functional teams in an Agile environment to ensure the quality and integrity of large-scale data solutions.

Key Responsibilities:
- Lead end-to-end testing efforts for data/ETL pipelines across big data platforms
- Design and implement test strategies for validating large datasets, transformations, and integrations
- Perform hands-on testing of Hadoop-based data platforms (HDFS, Hive, Spark, etc.)
- Develop complex SQL queries for data validation and business rule testing
- Collaborate with developers, product owners, and business analysts in Agile ceremonies
- Own test planning, test case design, defect tracking, and reporting for assigned modules
- Identify areas for automation and build reusable QA assets
- Drive QA best practices and mentor junior QA team members

Required Skills:
- 7-11 years of experience in software testing, with at least 3+ years in big data/Hadoop testing
- Strong hands-on experience in testing Hadoop components such as HDFS, Hive, Spark, and Sqoop
- Proficient in SQL (complex joins, aggregations, data validation)
- Experience in ETL/data warehouse testing
- Familiarity with data ingestion, transformation, and validation techniques
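As an illustration of the Spark-based source-to-target validation this role performs, here is a minimal PySpark sketch that uses exceptAll to surface mismatched rows between a staging source and a Hive-backed target; the table names and columns are invented.

```python
# Hedged sketch: diff a source and target table on the columns under test.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

source = spark.table("staging.transactions").select("txn_id", "amount", "status")
target = spark.table("dw.transactions").select("txn_id", "amount", "status")

# Rows present in one side but missing or altered in the other.
missing_in_target = source.exceptAll(target)
unexpected_in_target = target.exceptAll(source)

print("missing:", missing_in_target.count(),
      "unexpected:", unexpected_in_target.count())
```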
Posted 6 days ago
3.0 years
1 - 6 Lacs
Gurgaon
On-site
About the Role:
We are seeking an experienced AI Engineer who brings together the best of data analytics, cloud computing, and scalable AI application development. You will be responsible for designing, developing, and deploying AI solutions that leverage real-time data pipelines, APIs, and containerized microservices.

Key Responsibilities:
- Develop, deploy, and maintain AI/ML models and pipelines in production environments.
- Build and manage FastAPI-based APIs to serve AI models and analytics results.
- Design scalable and secure cloud-based architectures for AI services (AWS/Azure/GCP).
- Containerize applications using Docker and orchestrate with Kubernetes.
- Collaborate with data scientists, backend engineers, and DevOps teams to integrate models into applications.
- Optimize data ingestion, preprocessing, and model inference pipelines for performance and reliability.
- Monitor and improve model accuracy and system performance post-deployment.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, AI/ML, Data Engineering, or a related field.
- 3+ years of experience in AI/ML engineering or backend data systems.
- Proficient in Python, with experience in FastAPI, NumPy, pandas, and ML libraries (scikit-learn, PyTorch, or TensorFlow).
- Strong understanding of cloud services (AWS/GCP/Azure) and deployment best practices.
- Hands-on experience with Docker and Kubernetes for scalable service orchestration.
- Solid background in data analytics, including ETL, big data processing, and model interpretation.
- Familiarity with CI/CD tools and MLOps frameworks is a plus.

Nice to Have:
- Experience with real-time data pipelines (Kafka, Spark Streaming, etc.)
- Knowledge of database systems (SQL and NoSQL)
- Exposure to monitoring tools such as Prometheus, Grafana, or the ELK stack
- Prior experience with edge AI or distributed ML environments

Job Type: Full-time
Pay: ₹15,773.05 - ₹52,938.83 per month
Work Location: In person
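To illustrate the FastAPI model-serving responsibility, here is a minimal sketch assuming pydantic v2 and a pre-trained scikit-learn model saved with joblib; the model path and feature names are invented. In practice you would run it with uvicorn and package it into a Docker image for Kubernetes.

```python
# Hedged sketch: serve a pickled scikit-learn model behind a FastAPI endpoint.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model

class Features(BaseModel):
    age: float      # illustrative feature names
    income: float

@app.post("/predict")
def predict(f: Features) -> dict:
    X = pd.DataFrame([f.model_dump()])  # pydantic v2 API
    return {"prediction": float(model.predict(X)[0])}

# Run locally with: uvicorn main:app --host 0.0.0.0 --port 8000
```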
Posted 6 days ago
2.0 years
4 - 6 Lacs
India
On-site
Hiring: Power BI Engineer/Developer Opportunity!
Location: Gurgaon

What you'll do:
- Design and develop impactful BI reports and dashboards.
- Manage and maintain data integrity.
- Collaborate with stakeholders to understand their data needs.
- Ensure the accuracy and reliability of all data.
- Troubleshoot and maintain existing BI solutions.
- Implement and enforce data security measures.

What we're looking for:
- Minimum 2 years of experience in Power BI development.
- B.Tech/B.E. in Computer Science or Information Technology.
- Expert-level proficiency in Power BI.
- Strong knowledge of MS SQL Server, DAX, and other data languages.
- Experience with data ETL/import/export tools and BI integration tools.
- Familiarity with ASP.NET applications is a plus.

Job Type: Full-time
Pay: ₹40,000.00 - ₹55,000.00 per month
Benefits: Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: DLF Ph-II, Gurugram, Haryana: reliably commute or plan to relocate before starting work (Required)
Application Question(s): Can you appear for a face-to-face interview round?
Education: Bachelor's (Required)
Experience: Power BI: 2 years (Required)
Location: DLF Ph-II, Gurugram, Haryana (Required)
Work Location: In person
Posted 6 days ago
8.0 years
0 Lacs
India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description:
We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies and solution design, with hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills and experience in managing mid-size teams, handling client conversations, and presenting points of view and thought leadership.

Responsibilities:
- Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
- Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
- Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
- Ensure seamless data flow from various sources to destinations including ADLS Gen 2.
- Implement data quality checks and validation frameworks (see the sketch after this posting).
- Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
- Manage version control and deployment pipelines using Git and DevOps best practices.
- Provide accurate effort estimation and manage project timelines effectively.
- Collaborate with cross-functional teams to ensure aligned project goals and objectives.
- Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
- Stay updated with industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
- Facilitate discussions between technical and non-technical stakeholders to drive project success.
- Document technical solutions and design specifications clearly and concisely.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- 8+ years of experience in big data architecture and engineering.
- Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
- Proficient in using Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
- Strong experience in data modeling, metadata frameworks, and effort estimation.
- Experience with DevSecOps practices and proficiency in Git.
- Demonstrated experience in implementing data quality, data security, and data governance measures.
- Industry experience in banking, insurance, or pharma is a significant plus.
- Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
- Certification in Azure, Databricks, or related cloud technologies is a must.
- Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory skill sets: Data Architect/Data Engineer/AWS
Preferred skill sets: Data Architect/Data Engineer/AWS
Years of experience required: 8-12 years
Education qualification: B.E./B.Tech or M.E./M.Tech
Degrees/Field of Study required: Bachelor Degree, Master Degree
Required Skills: Data Architecture
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
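As a concrete flavor of the data quality checks and metadata-driven frameworks mentioned above, here is a hedged PySpark sketch; the check configuration and table names are invented for illustration.

```python
# Hypothetical sketch: metadata-driven quality checks on Databricks/PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Metadata: table -> columns that must be non-null, plus a unique key.
CHECKS = {
    "silver.policies": {"not_null": ["policy_id", "start_date"],
                        "unique": "policy_id"},
}

for table, rules in CHECKS.items():
    df = spark.table(table)
    for col in rules["not_null"]:
        nulls = df.filter(F.col(col).isNull()).count()
        assert nulls == 0, f"{table}.{col}: {nulls} null rows"
    key = rules["unique"]
    dupes = df.groupBy(key).count().filter("count > 1").count()
    assert dupes == 0, f"{table}.{key}: {dupes} duplicate keys"
```

Driving the checks from a config structure rather than hard-coded assertions is what makes the framework "metadata-driven": new tables only need a new entry, not new code.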
Posted 6 days ago
8.0 years
0 Lacs
India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description:
We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies and solution design, with hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills and experience in managing mid-size teams, handling client conversations, and presenting points of view and thought leadership.

Responsibilities:
- Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
- Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
- Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
- Ensure seamless data flow from various sources to destinations including ADLS Gen 2.
- Implement data quality checks and validation frameworks.
- Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
- Manage version control and deployment pipelines using Git and DevOps best practices.
- Provide accurate effort estimation and manage project timelines effectively.
- Collaborate with cross-functional teams to ensure aligned project goals and objectives.
- Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
- Stay updated with industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
- Facilitate discussions between technical and non-technical stakeholders to drive project success.
- Document technical solutions and design specifications clearly and concisely.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- 8+ years of experience in big data architecture and engineering.
- Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
- Proficient in using Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
- Strong experience in data modeling, metadata frameworks, and effort estimation.
- Experience with DevSecOps practices and proficiency in Git.
- Demonstrated experience in implementing data quality, data security, and data governance measures.
- Industry experience in banking, insurance, or pharma is a significant plus.
- Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
- Certification in Azure, Databricks, or related cloud technologies is a must.
- Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory skill sets: Data Architect/Data Engineer/AWS
Preferred skill sets: Data Architect/Data Engineer/AWS
Years of experience required: 8-12 years
Education qualification: B.E.(B.Tech)/M.E/M.Tech
Degrees/Field of Study required: Master Degree, Bachelor Degree
Required Skills: Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more}
Posted 6 days ago
2.0 years
7 - 10 Lacs
Bengaluru
On-site
DESCRIPTION
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

About Team:
The RBS team is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection, and good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and the online user experience.

Overview of the role:
The candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking, and will be required to engage directly with multiple internal teams to drive business projects and automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven environment.

Responsibilities include:
- Work across teams and the Ops organization at country, regional, and/or cross-regional level to create automated solutions for customers and cost savings through process automation, systems configuration, and performance metrics.
- Apply logical reasoning, critical thinking, and problem-solving to automation scripting, with framework engineering abilities and adherence to automation development best practices.
- Automate user interactions and APIs with existing tools/solutions, build localized small-scale solutions for quick deployment, and build Python scripts to automate day-to-day, repeatable activities within a team. Optionally, an Automation Expert may build a front-end UI for a web application.
- Prioritize projects and feature sets, and evaluate and set stakeholder expectations for Amazon's marketplace at country, regional, and/or cross-regional level.
- Write clear and detailed functional specifications based on business requirements, and write and review business cases.
- Apply a rigorous approach to problem solving; act as a credible business partner to Amazon's Operations network.
- Possess relevant understanding of and experience with automation processes and workflows; dive deep into the automation process to correct under-performing parts and act as a troubleshooter.
Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proficiency in automation using Python
- Excellent oral and written communication skills
- Experience with SQL, ETL processes, or data transformation

Preferred Qualifications:
- Experience with scripting and automation tools
- Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
- Knowledge of AWS services such as SQS, SNS, CloudWatch, and DynamoDB
- Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
- Understanding of cloud services, serverless architecture, and systems integration

Key job responsibilities:
As an Automation Expert, you are responsible for working with cross-functional teams to develop small-to-medium-scale, long-term automated solutions using APIs, Selenium, Python, and other tools, and for utilizing automation metrics to determine improvement opportunities. Working in a dynamic environment, you will be responsible for monitoring key success metrics. You will be expected to quickly become a subject matter expert in automation, and to help business leaders improve automation penetration, make better decisions, and generate value. In this role, you are expected to work closely with your peers and operations managers to understand potential business automation use cases and convert them into automated solutions.

BASIC QUALIFICATIONS
- Experience defining requirements and using data and metrics to draw business insights
- Experience with SQL or ETL
- 2+ years of tax, finance, or related analytical field experience

PREFERRED QUALIFICATIONS
- Knowledge of Python, VBA, Macros, Selenium scripts

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
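For a concrete flavor of the day-to-day Python automation described above, here is a hedged sketch that pulls a report from an HTTP API with requests and archives it to S3 with boto3; the URL, bucket, and key layout are hypothetical.

```python
# Hypothetical sketch: fetch a daily report and archive it to S3.
import datetime
import boto3
import requests

resp = requests.get("https://internal.example.com/reports/daily", timeout=30)
resp.raise_for_status()  # fail loudly rather than archiving an error page

key = f"reports/daily/{datetime.date.today():%Y/%m/%d}.json"
boto3.client("s3").put_object(Bucket="ops-automation-archive",
                              Key=key,
                              Body=resp.content)
print(f"archived s3://ops-automation-archive/{key}")
```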
Posted 6 days ago
5.0 years
4 - 6 Lacs
Bengaluru
On-site
DESCRIPTION
Amazon's ROW (Rest of World) Supply Chain Analytics team is looking for talented Business Intelligence Engineers who develop solutions to better manage and optimize speed and operations planning while providing the best experience to our customers at the lowest possible price. Our team members have an opportunity to be at the forefront of supply chain thought leadership by working on some of the most difficult problems with some of the best research scientists, product/program managers, software developers, and business leaders in the industry, shaping our roadmap to drive real impact on Amazon's long-term profitability. We are an agile team, building new analysis from the ground up, proposing new concepts and technology to meet business needs, and we enjoy and excel at diving into data to analyze root causes and implement long-term solutions.

As a BIE within the group, you will analyze massive data sets, identify areas to improve, define metrics to measure and monitor programs, build models to predict and optimize, and, most importantly, work with different stakeholders to drive improvements over time. You will also work closely with internal business teams to extract or mine information from our existing systems to create new analyses, build analytical products, and cause impact across wider teams in intuitive ways. This position provides opportunities to influence high-visibility, high-impact areas in the organization.

Successful candidates are right a lot, work very efficiently, and routinely deliver results on time. They have a global view of the analytical and/or science solutions that they build and consistently think in terms of automating, expanding, and scaling the results broadly. This position also requires you to work across a variety of teams, including transportation, operations, finance, delivery experience, people experience, and platform (software) teams. Successful candidates must thrive in fast-paced environments that encourage collaborative and creative problem solving, be able to measure and estimate risks, constructively critique peer research, extract and manipulate data across various data marts, and align research focuses with Amazon's strategic needs. We are looking for people with a flair for recognizing trends and patterns while correlating them to the business problem at hand. If you have an uncanny ability to decipher the exact policy, mechanism, or solution to address the challenge, and the ability to influence people using hard data (and some tact), then we are looking for you!
Key job responsibilities:
- Analyze historical data to identify trends and support decision making, including written and verbal presentation of results and recommendations
- Collaborate with product and software development teams to implement analytics systems and data structures to support large-scale data analysis and the delivery of analytical and machine learning models
- Mine and manipulate data from database tables, simulation results, and log files
- Identify data needs and drive data quality improvement projects
- Understand the broad range of Amazon's data resources, and which to use, how, and when
- Provide thought leadership on data mining and analysis
- Model complex/abstract problems, discover insights, and develop solutions/products using statistics, data mining, science/machine learning, and visualization techniques
- Help automate processes by developing deep-dive tools, metrics, and dashboards to communicate insights to the business teams
- Collaborate effectively with internal end-users, cross-functional software development teams, and technical support/sustaining engineering teams to solve problems and implement new solutions

About the team:
The ROW (Rest of World) Supply Chain Analytics team is hiring multiple BIE roles in the speed, planning, inbound, and S&OP functions. The role will be responsible for generating insights, defining metrics to measure and monitor, building analytical products, driving automation and self-serve, and overall driving business improvements. The role involves a combination of data analysis, visualization, statistics, scripting, a bit of machine learning, and usage of AWS services.

BASIC QUALIFICATIONS
- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with forecasting and statistical analysis
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
- Experience developing and presenting recommendations for new metrics, allowing better understanding of business performance

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
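To ground the "SQL to pull data, Python to process" qualification, here is a hedged sketch of querying Redshift (which speaks the Postgres wire protocol, so psycopg2 works) into pandas; the cluster endpoint, credentials, and table are placeholders.

```python
# Hypothetical sketch: pull an aggregate from Redshift into a DataFrame.
import pandas as pd
import psycopg2

conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="bie_user", password="...")  # elided credentials

# pandas accepts a DBAPI connection here (SQLAlchemy avoids its warning).
df = pd.read_sql("""
    SELECT ship_day, region, AVG(promise_minus_actual) AS avg_speed_delta
    FROM   ops.deliveries
    GROUP  BY 1, 2
""", conn)

print(df.sort_values("avg_speed_delta").head())
```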
Posted 6 days ago
2.0 years
4 - 6 Lacs
Bengaluru
On-site
DESCRIPTION
Are you passionate about transforming complex data into actionable business insights at a global scale? RBS Brand Experience (formerly APIE) is seeking an experienced Business Intelligence Engineer who thrives on ambiguity and can decipher evolving business needs to shape data-driven solutions. As a Business Intelligence Engineer, you'll be at the intersection of data and business strategy, translating complex requirements into actionable analytics solutions. You'll partner with stakeholders to unlock insights that elevate our global work authorization experiences and drive program scalability.

Key job responsibilities:
A successful candidate will demonstrate:
- Advanced SQL skills for writing complex queries and stored procedures to extract, transform, and analyze large datasets
- Proficiency in Python, particularly with libraries like pandas and PySpark, for data manipulation and ETL processes
- Strong analytical and problem-solving capabilities, with the ability to translate business requirements into efficient data solutions
- Experience in designing and implementing scalable ETL pipelines that can handle large volumes of data
- Expertise in data modeling and database optimization techniques to improve query performance
- Ability to work with various data sources and formats, integrating them into cohesive data structures
- Skill in developing and maintaining data warehouses and data lakes
- Proficiency in using BI tools to create insightful visualizations and dashboards
- Ability to thrive in ambiguous situations, identifying data needs and proactively proposing solutions
- Excellence in communicating technical concepts and data insights to both technical and non-technical audiences
- A customer-centric mindset with a focus on delivering data solutions that drive business value

A day in the life:
You'll work closely with Product Managers, Software Developers, and business stakeholders to:
- Build and maintain dashboards that drive business decisions
- Perform deep-dive analyses to uncover actionable insights
- Develop and automate data processes to improve efficiency
- Present findings and recommendations to leadership
- Partner with global teams to implement data-driven solutions

BASIC QUALIFICATIONS
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience applying basic statistical methods (e.g., regression) to difficult business problems
- Experience gathering business requirements and using industry-standard business intelligence tools to extract data, formulate metrics, and build reports

PREFERRED QUALIFICATIONS
- Bachelor's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis
- Experience in designing and implementing custom reporting systems using automation tools

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
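As an example of the "basic statistical methods (e.g., regression)" named in the qualifications, here is a small sketch fitting an OLS model with statsmodels on synthetic data; the variable names are invented and purely illustrative.

```python
# Hedged sketch: OLS regression relating a program metric to two drivers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
backlog = rng.uniform(0, 100, 200)     # synthetic driver 1
staffing = rng.uniform(1, 10, 200)     # synthetic driver 2
cycle_time = 2.0 + 0.05 * backlog - 0.3 * staffing + rng.normal(0, 0.5, 200)

X = sm.add_constant(np.column_stack([backlog, staffing]))
model = sm.OLS(cycle_time, X).fit()
print(model.summary())  # coefficients, p-values, and fit diagnostics
```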
Posted 6 days ago
5.0 years
0 - 0 Lacs
Bengaluru
On-site
About Invicta Learning:
Invicta Learning is a premier upskilling partner focused on delivering industry-relevant, hands-on training programs that prepare learners for the evolving tech landscape. We are conducting a comprehensive Data Engineering with AWS program aimed at building core competencies in cloud-based data engineering tools and platforms.

Program Overview:
This 40-day intensive training program focuses on building foundational to advanced skills in data engineering using AWS. The curriculum includes SQL, data warehousing, ETL/ELT, Python, Spark, AWS Glue, Redshift, Athena, Databricks, Airflow, and more. The program is enriched with power-skills sessions and assessments to ensure learner readiness for real-world deployment.

Key Responsibilities:
- Deliver end-to-end training on the Data Engineering with AWS curriculum as per Invicta's standardized content.
- Facilitate hands-on sessions and practical labs on tools such as SQL, Python, Spark, AWS Glue, Athena, Redshift, Databricks, Airflow, and related services.
- Guide learners through data modeling, ETL pipelines, real-time analytics, and data lake implementations using AWS.
- Provide mentorship, doubt-clearing sessions, and code reviews to support learners' growth.
- Conduct module-level and sprint-level assessments, ensuring learner performance tracking.
- Work closely with program coordinators to maintain daily attendance, learner engagement, and feedback loops.
- Support learner evaluations through quizzes, coding challenges, and mock interviews.
- Keep learners updated on industry best practices, tools, and career tips related to data engineering.
- Encourage a collaborative, interactive, and inclusive learning environment.

Curriculum Highlights:
You will be expected to train learners on topics including, but not limited to:
- SQL (Oracle & ANSI SQL): DDL, DML, joins, views, subqueries, transactions
- Data Warehousing & BI Concepts: OLAP, OLTP, star/snowflake schema, Data Vault
- ETL/ELT Processes: architecture, tools (AWS Glue, DMS), metadata handling
- Programming in Python: data structures, functions, file I/O, OOP, regex
- AWS Fundamentals: IAM, EC2, S3, RDS, VPC, monitoring
- Spark & PySpark: RDDs, DataFrames, DAGs, SparkSQL, streaming
- AWS Data Services: Glue, Athena, Redshift, Kinesis, Data Pipeline
- Databricks: workspace, jobs, clusters, data lake integration
- Airflow & Step Functions: workflow orchestration and automation

Candidate Requirements:
- Minimum 5 years of experience in data engineering, cloud technologies, or related training delivery.
- Proven expertise in AWS data services, Spark, and Python programming.
- Strong knowledge of data warehousing principles, ETL/ELT design, and modern data architecture.
- Hands-on experience with Databricks, Airflow, Glue, Athena, Redshift, and other AWS data tools.
- Excellent presentation, communication, and facilitation skills.
- Prior experience in classroom or virtual training delivery is preferred.
- Certifications such as AWS Certified Data Analytics, AWS Solutions Architect, or equivalent are a plus.

Preferred Skills:
- Exposure to real-time data processing tools like Kinesis, Kafka, or Spark Streaming.
- Understanding of data security, compliance, and monitoring in AWS.
- Familiarity with deployment pipelines and containerized environments is a plus.

Perks:
- Work with a dynamic and passionate team of trainers.
- Opportunity to upskill further with Invicta's partner ecosystem.
- Competitive compensation and performance incentives.
Job Type: Contractual / Temporary
Contract length: 40 days
Pay: ₹700.00 - ₹800.00 per hour
Schedule: Day shift
Work Location: In person
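For a taste of the Airflow orchestration module in the curriculum above, here is a minimal DAG sketch (using the Airflow 2.4+ `schedule` argument); the task bodies are stubs rather than real pipeline steps.

```python
# Hedged sketch: a three-step ETL DAG, Airflow 2.4+ syntax.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...   # stub: pull raw data
def transform(): ... # stub: clean and reshape
def load(): ...      # stub: write to the warehouse

with DAG(dag_id="etl_demo",
         start_date=datetime(2024, 1, 1),
         schedule="@daily",
         catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```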
Posted 6 days ago
3.0 years
4 - 8 Lacs
Bengaluru
On-site
DESCRIPTION
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

Overview of the role:
The Business Research Analyst will be responsible for the data and machine learning parts of continuous improvement projects across the compatibility and basket-building space. This will require collaboration with local and global teams that have process and technical expertise. The RA should therefore be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. In the compatibility program, the RA performs big data analysis to identify patterns and trains models to generate product-to-product and product-to-brand-and-model relationships. The RA also continuously improves the ML solution for higher accuracy, efficiency, and scalability, and writes clear, detailed functional specifications based on business requirements, as well as writing and reviewing business cases.

Key job responsibilities:
- Scope, drive, and deliver complex projects across multiple teams.
- Perform root cause analysis: understand the data need, pull the data, analyze it to form a hypothesis, and validate the hypothesis with data.
- Conduct thorough analysis of large datasets to identify patterns, trends, and insights that can inform the development of NLP applications.
- Develop and implement machine learning models and deep learning architectures to improve NLP systems.
- Design and implement core NLP tasks such as named entity recognition, classification, and part-of-speech tagging.
- Dive deep to drive product pilots, build and analyze large data sets, and construct problem hypotheses that help steer the product feature roadmap, using Python, database tools (e.g., SQL, Spark), and ML platforms (TensorFlow, PyTorch).
- Conduct regular code reviews and implement quality assurance processes to maintain high standards of code quality and performance optimization.
- Provide technical guidance and mentorship to junior team members, and collaborate with external partners to integrate cutting-edge technologies.
- Find scalable solutions to business problems by executing pilots and building deterministic and ML models (plug-and-play on ready-made ML models, with Python skills).
- Perform supporting research, conduct analysis of the larger parts of projects, and effectively interpret reports to identify opportunities, optimize processes, and implement changes within their part of the project.
- Coordinate design effort between internal and external teams to develop optimal solutions for their part of the project for Amazon's network.
- Convince and interact with stakeholders at all levels, whether to gather data and information or to execute and implement according to plan.
About the team:
Amazon.com operates in a virtual, global eCommerce environment without boundaries, and operates a diverse set of businesses in 14 countries, including retail, third-party marketplaces, eCommerce platforms, and web services for developers. The Retail Business Services (RBS) organization is a core part of optimizing the customer experience and the selling partner experience. This team is part of the RBS Customer Experience business unit. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The compatibility program handled by this team has a direct impact on customer buying decisions and the online user experience; it aims to answer customers' purchase questions about whether two products work together, as well as to reduce returns due to incompatibility.

BASIC QUALIFICATIONS
- Ability to analyse and then articulate business issues to a wide range of audiences, using strong data, written, and verbal communication skills
- Good mastery of BERT and other NLP frameworks such as GPT-2, XLNet, and Transformer models
- Experience in NLP techniques such as tokenization, parsing, lexing, named entity recognition, sentiment analysis, and spellchecking
- Strong problem-solving skills, creativity, and ability to overcome challenges
- SQL/ETL, automation tools
- Relevant bachelor's degree or higher
- 3+ years combined of relevant work experience in related fields (project management, customer advocate, product owner, engineering, business analysis); diverse experience across different roles will be favored
- Self-motivated and autonomous, with an ability to prioritize well and remain focused when working within a team located across several countries and time zones

PREFERRED QUALIFICATIONS
- 3+ years combined of relevant work experience in related fields (project management, customer advocate, product owner, engineering, business analysis); diverse experience across different roles will be favored
- Understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services
- Experience with computer vision algorithms and libraries such as OpenCV, TensorFlow, Caffe, or PyTorch
- Technical expertise and experience in data science and ML

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
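To make the named entity recognition responsibility concrete, here is a hedged sketch using the Hugging Face transformers pipeline with its default English NER model; the sample sentence is invented and the package (plus a backend such as PyTorch) is assumed to be installed.

```python
# Hypothetical sketch: out-of-the-box NER with the transformers pipeline.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # default English model

sentence = "The Anker 535 power bank charges a MacBook Pro over USB-C."
for ent in ner(sentence):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```

A production compatibility system would fine-tune such a model on product-domain labels (brand, model number, connector type) rather than relying on the generic CoNLL-style entity set.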
Posted 6 days ago
3.0 years
4 - 6 Lacs
Bengaluru
On-site
DESCRIPTION
The SCOT AIM team is seeking an exceptional Business Intelligence Engineer to join our innovative inventory automation analytics team. This pioneering role will be instrumental in building and scaling analytics solutions that drive critical business decisions across inventory management, supply chain optimization, and channel performance. You will work closely with scientists, product managers, other business intelligence engineers, and supply chain managers to build scalable, high-insight, high-impact products and own improvements to business outcomes within your area, enabling worldwide and local solutions for retail.

Key job responsibilities:
- Work with product managers to understand customer behaviors, spot system defects, and benchmark our ability to serve our customers, improving a wide range of internal products that impact selection decisions both nationally and regionally.
- Design and develop end-to-end analytics solutions to monitor and optimize supply chain metrics, including but not limited to availability, placement, inventory efficiency, and capacity planning and management at various business hierarchies.
- Create interactive dashboards and automated reporting systems to enable deep-dive analysis of inventory performance across multiple dimensions (ASIN/GL/sub-category/LOB/brand level).
- Build predictive models for seasonal demand forecasting and inventory planning, supporting critical business events and promotions.
- Create scalable solutions for tracking deal inventory readiness for small events and channel share management.
- Partner with category and business stakeholders to identify opportunities for process automation and innovation.

A day in the life:
- Pioneering new analytical approaches and establishing best practices.
- Building solutions from the ground up with significant autonomy.
- Driving innovation in supply chain analytics through automation and advanced analytics.
- Making a direct impact on business performance through data-driven decision making.

About the team:
Have you ever ordered a product on Amazon and, when that box with the smile arrived, wondered how it got to you so fast? Wondered where it came from and how much it cost Amazon? If so, Amazon's Supply Chain Optimization Technology (SCOT) organization is for you. At SCOT, we solve deep technical problems and build innovative solutions in a fast-paced environment, working with smart and passionate team members. (Learn more about SCOT: http://bit.ly/amazon-scot)

BASIC QUALIFICATIONS
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
- Experience working directly with business stakeholders to translate between data and business needs

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 6 days ago
5.0 years
4 - 8 Lacs
Bengaluru
On-site
DESCRIPTION
Key Responsibilities:
Own and develop advanced substitutability analysis frameworks combining text-based and visual matching capabilities
Drive technical improvements to product matching models to enhance accuracy beyond the current 79% in structured categories
Design category-specific matching criteria, particularly for complex categories like fashion, where accuracy is currently at 20%
Develop and implement advanced image matching techniques, including pattern recognition, style segmentation, and texture analysis (a minimal similarity-scoring sketch follows this posting)
Create performance measurement frameworks to evaluate product matching accuracy across different product categories
Partner with multiple data and analytics teams to integrate various data signals
Provide technical expertise in scaling substitutability analysis across 2,000 different product types in multiple markets
Technical Requirements:
Deep expertise in developing hierarchical matching systems
Strong background in image processing and visual similarity algorithms
Experience with large-scale data analysis and model performance optimization
Ability to work with multiple data sources and complex matching criteria
Key job responsibilities
Success Metrics:
Drive improvement in substitutability accuracy to >70% across all categories
Reduce manual analysis time for product matching identification
Successfully implement enhanced visual matching capabilities
Create scalable solutions for multi-market implementation
A day in the life
Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Oracle, and OLAP technologies. Provide online reporting and analysis using OBIEE business intelligence tools and a logical abstraction layer against large, multi-dimensional datasets and multiple sources. Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture. Analyze source data systems and drive best practices in source teams. Participate in the full development life cycle, end-to-end, from design, implementation, and testing to documentation, delivery, support, and maintenance. Produce comprehensive, usable dataset documentation and metadata. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. Evaluate and make decisions around the use of new or existing software products and tools. Mentor junior Business Research Analysts.
About the team
The RBS Availability program includes Selection Addition (where new Head Selections are added based on gaps identified by Selection Monitoring, SM), Buyability (ensuring new HS additions are buyable and recovering established ASINs that became non-buyable), SoROOS (rectifying defects for sourceable out-of-stock ASINs), Glance View Speed (offering ASINs with the best promise speed based on Store/Channel/FC-level nuances), Emerging MPs, and ASIN Productivity (ensuring every ASIN's actual contribution profit meets or exceeds the estimate). The North Star of the Availability program is to "Ensure all customer-relevant (HS) ASINs are available in Amazon Stores with guaranteed delivery promise at an optimal speed."
To achieve this, we collaborate with SM, SCOT, Retail Selection, Category, and US-ACES to identify overall opportunities, defect drivers, and ingress across forecasting, sourcing, procurability, and availability systems, fixing them through UDE/tech-based solutions.
BASIC QUALIFICATIONS
5+ years of SQL experience
Experience programming to extract, transform, and clean large (multi-TB) data sets
Experience with the theory and practice of design of experiments and statistical analysis of results
Experience with AWS technologies
Experience scripting for automation (e.g., Python) and advanced SQL skills
Experience with the theory and practice of information retrieval, data science, machine learning, and data mining
PREFERRED QUALIFICATIONS
Experience working directly with business stakeholders to translate between data and business needs
Experience managing, analyzing, and communicating results to senior leadership
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
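A minimal sketch of the matching idea in this posting, assuming substitutability is scored as cosine similarity between precomputed product embeddings; the vectors, ASIN labels, and 0.8 threshold are illustrative stand-ins, not the team's actual models.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical title/image embeddings for candidate substitutes.
candidates = {
    "ASIN-A": np.array([0.91, 0.10, 0.33]),
    "ASIN-B": np.array([0.88, 0.15, 0.30]),
    "ASIN-C": np.array([0.05, 0.95, 0.20]),
}
query = np.array([0.90, 0.12, 0.31])

# Rank candidates; an illustrative 0.8 threshold gates what counts as a match.
scores = {asin: cosine_similarity(query, emb) for asin, emb in candidates.items()}
matches = {a: s for a, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s >= 0.8}
print(matches)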
Posted 6 days ago
1.0 years
4 - 6 Lacs
Bengaluru
On-site
DESCRIPTION
Amazon is a place where data drives most of our decision-making. The Analytics, Operations & Programs (AOP) team is looking for a dynamic data engineer who is innovative, a strong problem solver, and able to lead the implementation of the analytical data infrastructure that will guide decision making. As a Data Engineer, you think like an entrepreneur, constantly innovating and driving positive change, but more importantly, you consistently deliver mind-boggling results. You're a leader who uses both quantitative and qualitative methods to get things done. And on top of it all, you're someone who wonders "What if?" and then seeks out the solution.
This position offers exceptional opportunities to grow your technical and non-technical skills. You have the opportunity to make a real difference to our business by inventing, enhancing, and building world-class systems, delivering results, and working on exciting and challenging projects. As a Data Engineer, you are responsible for analyzing large amounts of business data, solving real-world problems, and developing metrics and business cases that will enable us to continually delight our customers worldwide. This is done by leveraging data from various platforms such as Jira, Portal, and Salesforce. You will work with a team of Product Managers, Software Engineers, and Business Intelligence Engineers to automate and scale the analysis, and to make the data more actionable for managing the business at scale.
You will own many large datasets and implement new data pipelines that feed into or from critical data systems at Amazon. You must be able to prioritize and work well in an environment with competing demands. Successful candidates will bring strong technical abilities combined with a passion for delivering results for customers, internal and external. This role requires a high degree of ownership and a drive to solve some of the most challenging data and analytics problems in retail. Candidates must have a demonstrated ability to manage large-scale data modeling projects, identify requirements and tools, and build data warehousing solutions that are explainable and scalable. In addition to the technical skills, a successful candidate will possess strong written and verbal communication skills and high intellectual curiosity, with the ability to learn new concepts, frameworks, and technologies rapidly as changes arise.
Key job responsibilities
Design, implement, and support an analytical data infrastructure
Manage AWS resources including EC2, EMR, S3, Glue, Redshift, etc.
Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies
Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
Collaborate with Data Scientists and Business Intelligence Engineers (BIEs) to recognize and help adopt best practices in reporting and analysis
Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
Maintain internal reporting platforms/tools, including troubleshooting and development
Interact with internal users to establish and clarify requirements in order to develop report specifications
Work with Engineering partners to help shape and implement the development of BI infrastructure, including data warehousing, reporting, and analytics platforms
Contribute to the development of BI tools, skills, culture, and impact
Write advanced SQL queries and Python code to develop solutions
A day in the life
This role requires you to live at the intersection of data, software, and analytics. We leverage a comprehensive suite of AWS technologies, with key tools including S3, Redshift, DynamoDB, Lambda, APIs, and Glue. You will drive the development process from design to release: managing data ingestion from heterogeneous data sources with automated data quality checks (a minimal quality-gate sketch follows this posting); creating scalable data models for effective data processing, storage, retrieval, and archiving; using scripting for automation and tool development that is scalable, reusable, and maintainable; providing infrastructure for self-serve analytics and science use cases; and using industry best practices in building CI/CD pipelines.
About the team
The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and reduce repeat analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG, and SA marketplaces. AOP is responsible for providing visibility into operations performance and implementing programs to improve network efficiency and defect reduction. The team has a diverse mix of strong engineers, analysts, and scientists who champion customer obsession. We enable operations to make data-driven decisions by developing near real-time dashboards and self-serve dive-deep capabilities and by building advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams.
BASIC QUALIFICATIONS
1+ years of data engineering experience
Experience with data modeling, warehousing, and building ETL pipelines
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
Experience with one or more scripting languages (e.g., Python, KornShell)
PREFERRED QUALIFICATIONS
Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
Experience with an ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
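A minimal sketch of the "automated data quality checks" mentioned above: gate an ingested batch on nulls, duplicates, and range violations before loading. The thresholds, column names, and sample batch are hypothetical.

import pandas as pd

def check_batch(df: pd.DataFrame) -> list[str]:
    # Collect human-readable failure reasons; an empty list means the batch passes.
    failures = []
    if df["order_id"].isna().any():
        failures.append("null order_id values present")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values present")
    if (df["amount"] < 0).any():
        failures.append("negative amounts present")
    return failures

batch = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [10.0, -5.0, 7.5, 3.2],
})
problems = check_batch(batch)
print("FAIL" if problems else "PASS", problems)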
Posted 6 days ago
4.0 years
0 Lacs
Bengaluru
On-site
DESCRIPTION
The Amazon Transportation team is looking for an innovative, hands-on, and customer-obsessed Business Analyst for the Analytics team. The candidate must be detail-oriented, have superior verbal and written communication skills, strong organizational skills, and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline the process, identifying gaps in the existing process by analyzing data and liaising with relevant teams to plug them, and analyzing data and metrics and sharing updates with internal teams.
Key job responsibilities
1) Apply multi-domain/process expertise in day-to-day activities and own the end-to-end roadmap.
2) Translate complex or ambiguous business problem statements into analysis requirements and maintain a high bar throughout the execution.
3) Define the analytical approach; review and vet it with stakeholders.
4) Proactively and independently work with stakeholders to construct use cases and associated standardized outputs.
5) Scale data processes and reports; write queries that clients can update themselves; lead work with data engineering for full-scale automation.
6) Have a working knowledge of the data available or needed by the wider business for more complex or comparative analysis.
7) Work with a variety of data sources and pull data using efficient query development that requires less post-processing (e.g., window functions, virt usage; a minimal window-function sketch follows this posting).
8) When needed, pull data from multiple similar sources to triangulate on data fidelity.
9) Actively manage the timeline and deliverables of projects, focusing on interactions in the team.
10) Provide program communications to stakeholders.
11) Communicate roadblocks to stakeholders and propose solutions.
12) Represent the team on medium-size analytical projects in your own organization and communicate effectively across teams.
A day in the life
1) Solve ambiguous analyses with less well-defined inputs and outputs; drive to the heart of the problem and identify root causes.
2) Handle large data sets in analysis through the use of additional tools.
3) Derive recommendations from analysis that significantly impact a department, create new processes, or change existing processes.
4) Understand the basics of test-and-control comparison; may provide insights through basic statistical measures such as hypothesis testing.
5) Identify and implement optimal communication mechanisms based on the data set and the stakeholders involved.
6) Communicate complex analytical insights and business implications effectively.
About the team
The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and reduce repeat analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG, and SA marketplaces. AOP is responsible for providing visibility into operations performance and implementing programs to improve network efficiency and defect reduction. The team has a diverse mix of strong engineers, analysts, and scientists who champion customer obsession.
We enable operations to make data-driven decisions by developing near real-time dashboards and self-serve dive-deep capabilities and by building advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams.
BASIC QUALIFICATIONS
4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
Experience developing and presenting recommendations for new metrics, allowing better understanding of the performance of the business
4+ years of e-commerce, transportation, finance, or related analytical field experience
PREFERRED QUALIFICATIONS
Experience with statistical analysis packages such as R, SAS, and MATLAB
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
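A minimal sketch of the window-function point in responsibility 7: ranking rows inside the query itself removes a client-side post-processing pass. Table and column names are hypothetical; SQLite (3.25+ for window functions) is used only to keep the example self-contained.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, order_id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("IN", 1, 120.0), ("IN", 2, 90.0), ("AU", 3, 200.0), ("AU", 4, 150.0),
])

# RANK() runs per region inside the database, so the client receives final rows.
query = """
SELECT region, order_id, amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS amount_rank
FROM orders
"""
for row in con.execute(query):
    print(row)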
Posted 6 days ago
3.0 years
7 - 10 Lacs
Bengaluru
On-site
DESCRIPTION
Are you passionate about solving business challenges at a global scale? Amazon Employee Services is looking for an experienced Business Analyst to join the Retail Business Services team and help unlock insights that take our business to the next level. The candidate will be excited about understanding and implementing new and repeatable processes to improve our employees' global work-authorization experience. They will do this by partnering with key stakeholders, staying curious, and digging deep into business challenges to identify insights that help us define standards and improve our ability to scale this program globally. They will be comfortable delivering and presenting these recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to improve the business decision-making process.
This role requires an individual with excellent analytical abilities as well as outstanding business acumen. The candidate knows and values our customers (internal and external) and will work backwards from the customer to create structured processes for the global expansion of work authorization, and help integrate new countries and new acquisitions into the existing program. They are experts in partnering and earning trust with operations and business leaders to drive these key business decisions.
Responsibilities:
Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards.
Partner with operations and business teams to consult on, develop, and implement KPIs, automated reporting/process solutions, and process improvements to meet business needs.
Enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format.
Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks.
Participate in strategic and tactical planning discussions.
Design, develop, and maintain scaled, automated, user-friendly systems, reports, and dashboards that will support our business needs.
Create artifacts easily digestible by business and tech partners, drawing on excellent writing skills.
Key job responsibilities
Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight (a minimal pivot-table sketch follows this posting)
Understand the requirements of stakeholders and map them to the data sources/data warehouse
Own the delivery and backup of periodic metrics and dashboards to the leadership team
Draw inferences and conclusions, create dashboards and visualizations of processed data, and identify trends and anomalies
Execute high-priority (i.e.,
cross-functional, high-impact) projects to improve operations performance with the help of Operations Analytics managers
Perform business analysis and data queries using appropriate tools
Work closely with internal stakeholders such as business teams, engineering teams, and partner teams, and align them with respect to your focus area
BASIC QUALIFICATIONS
3+ years of Excel or Tableau (data manipulation, macros, charts, and pivot tables) experience
Experience defining requirements and using data and metrics to draw business insights
Experience with SQL or ETL
Knowledge of data visualization tools such as QuickSight, Tableau, Power BI, or other BI packages
1+ years of tax, finance, or related analytical field experience
PREFERRED QUALIFICATIONS
Experience with Amazon Redshift and other AWS technologies
Experience creating complex SQL queries joining multiple datasets; ETL/DW concepts
Experience in Scala and PySpark
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
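A minimal sketch of the pivot-table workflow named in the qualifications, done here in Python with pandas rather than Excel; the case data and column names are illustrative.

import pandas as pd

cases = pd.DataFrame({
    "team": ["Ops", "Ops", "Legal", "Legal", "Ops"],
    "month": ["2024-01", "2024-02", "2024-01", "2024-02", "2024-02"],
    "cases": [34, 41, 12, 9, 38],
})

# Equivalent of an Excel pivot: teams as rows, months as columns, summed counts.
summary = cases.pivot_table(index="team", columns="month",
                            values="cases", aggfunc="sum", fill_value=0)
print(summary)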
Posted 6 days ago
2.0 years
5 - 8 Lacs
Bengaluru
On-site
DESCRIPTION
The Retail Business Services (RBS) group is an integral part of Amazon's online product life cycle and supports buying operations. The team’s primary role is to support the creation and enhancement of retail selection on the worldwide Amazon online catalog. The tasks handled by this group can impact the online user experience. The successful Subject Matter Expert is a problem-solver, mentor, and communicator with strong expertise in process optimization and systems thinking. You will engage directly with multiple internal teams to drive business projects for the RBS team. You will utilize a wide range of skills and work across major functional areas, such as operations, vendor management, and program management, to independently drive performance improvement projects. In this role you will be focused on the execution and operational aspects of critical work involved for Amazon customers (vendors/vendor managers/end customers) and on root cause analysis of issues and opportunities affecting the business. Please note that you will be expected to specifically work on transactional but business-critical activities and have a hands-on approach.
Responsibilities include:
Success will be measured by the performance of your internal teams on input metrics and individual project deliverables
Build strong communication channels at all levels, set proper expectations, provide clear status communications, and manage towards a growth plan for the vendors
Work with various internal teams to help drive tools and process improvements that affect vendor/catalog management workflows
Drive appropriate data-oriented analysis, adoption of technology solutions, and process improvement projects to achieve operational and business goals
Ensure high quality standards for interviewing and hiring employees at all levels of the organization
Work with internal Amazon teams/vendors to improve operational aspects of their business in providing a great consumer experience
Conduct deep-dive analysis on handled issues and publish recommendations and action plans based on data to prevent future failures
Provide thought leadership around planning, roadmaps, and execution
Support the launches of new programs, categories, and features
Ensure that all in-house systems and procedures are updated, revised, and modified
BASIC QUALIFICATIONS
2+ years of program or project management experience
Experience using data to influence business decisions
Bachelor's degree
Speak, write, and read fluently in English
PREFERRED QUALIFICATIONS
Knowledge of analytics and statistical tools such as SAS, Power BI, SQL, and ETL/DW concepts
Experience in back-office operations, escalation management, and troubleshooting environments
Experience in the design and execution of analytics projects
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 6 days ago
1.0 years
0 Lacs
Karnataka
On-site
DESCRIPTION
Are you passionate about solving business challenges at a global scale? The Retail Business Services - BX team is looking for an experienced Business Analyst to join Retail Business Services and enable insights that help our selling partners take their businesses to the next level. The candidate will have to understand and implement new and repeatable processes to improve our strategic insights for selling partners. They will do this by partnering with stakeholders and digging deep into business challenges to identify insights that help us define standards and improve our ability to scale this program globally. They will be comfortable delivering and presenting these recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to improve the business decision-making process.
This role requires an individual with excellent analytical abilities as well as outstanding business acumen. The candidate knows and values our customers (internal and external) and will work backwards from the customer to create structured processes for global expansions and help integrate new countries and new acquisitions into the existing program. They are experts in partnering and earning trust with operations and business leaders to drive these key business decisions.
Key job responsibilities
An ideal candidate for this role:
Will have relevant experience in data analytics, working with large data sets and extracting and transforming data using various tools and technologies
Will transform data into actionable business information and make it readily accessible to stakeholders worldwide
Will use data to support ideas, drive actionable outcomes, and provide unique ways to present data and information in an easy-to-consume format
Will be passionate about finding root causes, trends, and patterns and how they impact business
Will draw inferences and conclusions, and create dashboards and visualizations of processed data
Will have the business and communication skills to work with product owners to understand key business questions and build reports that enable product owners to answer those questions quickly and accurately
Will be very comfortable juggling competing priorities and handling some level of ambiguity
Will thrive in an agile and fast-paced environment on highly visible projects and initiatives
About the team
Amazon.com, Inc. (NASDAQ: AMZN), a Fortune 500 company based in Seattle, opened on the World Wide Web in July 1995 and today offers Earth's Biggest Selection. Amazon.com, Inc. seeks to be Earth's most customer-centric company, where customers can find and discover anything they might want to buy online, and endeavors to offer its customers the lowest possible prices. Today, we operate retail websites in nine countries, offering millions of products in more than 40 categories worldwide, and we still like to work hard, have fun, and make history!
Retail Business Services (RBS) leverages technology to improve the customer experience and selling-partner experience while lowering Amazon’s cost structure. The vision of RBS is to accelerate Amazon’s flywheel by improving the customer experience through fixing detail-page catalog defects at scale, improving selling-partner listing quality to drive GMS, and reducing fulfillment defects to drive profitability. We strive to eliminate the root cause of each defect and, wherever that is not possible, we leverage machine learning to find and fix defects at scale or surface them to selling partners.
RBS has multiple programs/services aimed at reducing listing friction, improving listing quality, reducing customer returns, and improving star ratings of products, all of which depend on selling-partner support for effective execution.
BASIC QUALIFICATIONS
1+ years of experience writing complex Excel VBA macros
Experience creating complex SQL queries joining multiple datasets; ETL/DW concepts (a minimal multi-dataset join sketch follows this posting)
Knowledge of Python, VBA, macros, and Selenium scripts
PREFERRED QUALIFICATIONS
Knowledge of data visualization tools such as QuickSight, Tableau, Power BI, or other BI packages
Knowledge of NLP and text processing
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
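A minimal sketch of a multi-dataset join of the kind the SQL qualification describes, written here in pandas; the listing and defect schemas are hypothetical stand-ins.

import pandas as pd

listings = pd.DataFrame({
    "asin": ["A1", "A2", "A3"],
    "partner_id": ["P1", "P1", "P2"],
})
defects = pd.DataFrame({
    "asin": ["A1", "A3", "A3"],
    "defect_type": ["missing_image", "bad_title", "missing_bullet"],
})

# Left join keeps defect-free listings; then roll defects up per partner.
joined = listings.merge(defects, on="asin", how="left")
per_partner = (joined.dropna(subset=["defect_type"])
                     .groupby("partner_id")["defect_type"].count())
print(per_partner)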
Posted 6 days ago
4.0 years
6 - 10 Lacs
Bengaluru
On-site
DESCRIPTION
Are you interested in working on a team that positively impacts key pillars of Amazon like Pricing, Promotions, Advertising, Auto inventory purchasing, Auto inventory removal, and Inventory placement? Are you interested in working for a team that builds cool systems yet has great work-life balance? As a Support Engineer, you will build systems that secure and govern our data end to end: control access across multiple storage and access layers (such as in-house applications and BI tools), track data quality, catalog datasets and their lineage, detect duplication, audit usage, and ensure correct data semantics. You will be responsible for crunching and providing support for petabytes of incoming data from dozens of sources and financial events around the company.
Key job responsibilities
Provide support for incoming tickets, including extensive troubleshooting tasks, with responsibilities covering multiple products, features, and services (a minimal API troubleshooting sketch follows this posting)
Work on operations- and maintenance-driven coding projects
Support software deployment in staging and production environments
Develop tools to aid operations and maintenance
Report on system and support status
Own one or more ETL products or components
Handle customer notification and workflow coordination and follow-up to maintain service level agreements
Work with the support team to hand off or take over active support issues, and create a team-specific knowledge base and skill set
About the team
The Profit Intelligence system measures and predicts true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what: we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible, but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.
BASIC QUALIFICATIONS
4+ years of software development or technical support experience
Experience scripting in modern programming languages
Experience troubleshooting and debugging technical systems
Experience in agile/scrum or related collaborative workflows
Experience troubleshooting and documenting findings
PREFERRED QUALIFICATIONS
Knowledge of distributed applications/enterprise applications
Knowledge of the UNIX/Linux operating system
Experience analyzing and troubleshooting RESTful web API calls
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
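A minimal sketch of troubleshooting a RESTful API call, as the preferred qualifications mention: probe an endpoint with a timeout and surface the status and body for a ticket. The URL is a public test endpoint, not an Amazon service, and only the Python standard library is used.

import json
import urllib.error
import urllib.request

def probe(url: str) -> None:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
            print(f"{resp.status} OK, top-level keys: {sorted(payload)}")
    except urllib.error.HTTPError as err:
        # Non-2xx: record status and a body excerpt for the support ticket.
        print(f"HTTP {err.code}: {err.read().decode('utf-8', 'replace')[:200]}")
    except urllib.error.URLError as err:
        print(f"Connection failed: {err.reason}")

probe("https://httpbin.org/json")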
Posted 6 days ago
3.0 years
4 - 8 Lacs
Chennai
On-site
DESCRIPTION
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from website to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.
Overview of the role
The Business Research Analyst will be responsible for the data and machine learning parts of continuous-improvement projects across the compatibility and basket-building space. This will require collaboration with local and global teams that have process and technical expertise. Therefore, the RA should be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. In the compatibility program, the RA performs big-data analysis to identify patterns and trains models to generate product-to-product relationships and product-to-brand/model relationships. The RA also continuously improves the ML solution for higher accuracy, efficiency, and scalability. The RA writes clear and detailed functional specifications based on business requirements, and writes and reviews business cases.
Key job responsibilities
Scope, drive, and deliver complex projects across multiple teams.
Perform root cause analysis by understanding the data need, pulling the data, and analyzing it to form hypotheses and validate them with data.
Conduct thorough analysis of large datasets to identify patterns, trends, and insights that can inform the development of NLP applications.
Develop and implement machine learning models and deep learning architectures to improve NLP systems.
Design and implement core NLP tasks such as named entity recognition, classification, and part-of-speech tagging (a minimal NER/POS sketch follows these responsibilities).
Dive deep to drive product pilots, build and analyze large data sets, and construct problem hypotheses that help steer the product feature roadmap, using Python, database tools (e.g., SQL, Spark), and ML platforms (TensorFlow, PyTorch).
Conduct regular code reviews and implement quality assurance processes to maintain high standards of code quality and performance optimization.
Provide technical guidance and mentorship to junior team members and collaborate with external partners to integrate cutting-edge technologies.
Find scalable solutions for business problems by executing pilots and building deterministic and ML models (plug-and-play on ready-made ML models with Python skills).
Perform supporting research, conduct analysis for the larger parts of projects, and effectively interpret reports to identify opportunities, optimize processes, and implement changes within their part of the project.
Coordinate design efforts between internal and external teams to develop optimal solutions for their part of the project for Amazon's network.
Convince and interact with stakeholders at all levels, whether to gather data and information or to execute and implement according to plan.
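A minimal sketch of the NER and part-of-speech tasks listed above, using spaCy (assumes pip install spacy and python -m spacy download en_core_web_sm have been run). For the compatibility program, brand and model entities would need custom labels; the pretrained labels here are only illustrative.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The Canon EOS R6 body is compatible with the RF 24-105mm lens.")

for ent in doc.ents:
    print(ent.text, ent.label_)    # named entities (e.g., product/org spans)
for token in doc[:6]:
    print(token.text, token.pos_)  # part-of-speech tags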
About the team
Amazon.com operates in a virtual, global eCommerce environment without boundaries, and runs a diverse set of businesses in 14 countries, including Retail, third-party marketplaces, eCommerce platforms, and web services for developers. The Retail Business Services (RBS) organization is a core part of optimizing the customer experience and the selling-partner experience. This team is part of the RBS Customer Experience business unit. The team’s primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The compatibility program handled by this team has a direct impact on customer buying decisions and the online user experience. The program aims to answer customers’ purchase question of whether two products work together, as well as to reduce returns due to incompatibility.
BASIC QUALIFICATIONS
Ability to analyse and then articulate business issues to a wide range of audiences, using strong data, written, and verbal communication skills
Good mastery of BERT and other NLP frameworks such as GPT-2, XLNet, and Transformer models
Experience in NLP techniques such as tokenization, parsing, lexing, named entity recognition, sentiment analysis, and spellchecking
Strong problem-solving skills, creativity, and the ability to overcome challenges
SQL/ETL, automation tools
Relevant bachelor’s degree or higher
3+ years of combined relevant work experience in related fields (project management, customer advocate, product owner, engineering, business analysis); diverse experience will be favored, e.g., a mix of experience across different roles
Self-motivated and autonomous, with the ability to prioritize well and remain focused when working within a team located across several countries and time zones
PREFERRED QUALIFICATIONS
3+ years of combined relevant work experience in related fields (project management, customer advocate, product owner, engineering, business analysis); diverse experience will be favored, e.g., a mix of experience across different roles
Understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services
Experience with computer vision algorithms and libraries such as OpenCV, TensorFlow, Caffe, or PyTorch
Technical expertise and experience in data science and ML
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 6 days ago
3.0 - 5.0 years
3 - 5 Lacs
Noida
On-site
Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.
Growth Strategy Team at Innovaccer
Innovaccer is forming a new strategic advisory team that will help healthcare organizations better understand their opportunities and levers for maximizing outcomes, particularly in, but not limited to, value-based care arrangements and population health initiatives. This role requires a "full stack" approach to analytics, covering all parts of the analytics value chain, including data ETL and manipulation, analysis, reporting, visualizations, insights, and final deliverable creation. The ideal candidate will possess a player/coach mentality as this team matures, with the willingness and ability to roll up their sleeves and contribute in the early days, then grow in responsibility as we scale. This candidate will be comfortable diving into both structured and unstructured data, creating robust financial models and business cases, producing compelling visualizations and collateral, and leading the narrative on data-driven storytelling.
About the Role
We are looking for a Senior Manager, Advisory Services, a key role within the Advisory Services team at Innovaccer. This individual will be responsible for delivering key customer analytics (e.g., ROI models), performance analytics, and slide presentations to support multiple client pursuits and engagements. The ideal candidate has a strong desire to learn about the US healthcare system, is organized and structured, has excellent written and verbal communication skills, and is a fast learner. The role requires both analytical skills and creativity to articulate and communicate complex messages about healthcare and technology to a wide-ranging audience. You will be aligned with a Managing Director/Director in the US who will provide direction on day-to-day work and help you learn about the company and the industry.
A Day in the Life
Under the direction of Advisory Services leaders, engage with prospect organizations on intended business outcomes and request data assets to model potential scenarios.
Own, digest, and interpret data in a variety of forms, from aggregated metrics in spreadsheets to unstructured formats to raw transactional forms like medical claims.
Own and execute the entire analytics lifecycle, leveraging data in all its available forms to produce cogent and compelling business cases, financial models, presentations, and other executive-ready final deliverables.
Synthesize insights to inform strategic direction, roadmap creation, and opportunities.
Couple Innovaccer's technology platform (including data, software and workflow applications, analytics, and AI) with identified insights and opportunities to create prescriptive recommendations that maximize value creation and outcomes.
Develop findings and insights for senior leadership of prospects, clients, and Innovaccer stakeholders in a clear and compelling manner.
Stay up to date with the latest analytics technologies and methodologies to enhance capabilities.
Build compelling presentations, including client sales and engagement delivery decks, case studies, talk tracks, and visuals.
Research and analyze high-priority strategic clients, industry best practices, and market intelligence, including industry mapping, customer profiling, competitive insights, and deep dives into select solution opportunities
Co-develop and maintain a standardized value-lever framework, segment-based pitch decks, and customer case studies for use across multiple advisory pursuits and engagements
Provide analytics thought partnership and data support on the design, execution, and measurement of impactful advisory services strategy initiatives
Collaborate across Advisory Services, Growth Strategy, Marketing, Sales, Product, and Customer Success teams and business leaders to address business questions that can be answered effectively through data-driven modeling and insights
Develop slide presentations for quarterly and annual reporting presentations
Structure, manage, and write responses to RFPs
What You Need
Degree from a Tier 1 college, with a relevant degree in Finance, Economics, Statistics, Business, or Marketing
3-5 years of professional experience, including experience in management consulting and/or go-to-market roles in a technology/software/SaaS company
Strong technical aptitude and fantastic storytelling skills, with a great track record of working across sales, marketing, and technology teams
Ability to identify, source, and include data elements to drive analytical models and outputs
Experience creating Excel models (identifying inputs, key considerations/variables, and relevant outputs) and PowerPoint presentations
Familiarity with leveraging AI tools (e.g., generative AI, AI-enhanced research tools, AI-based data analysis platforms) to enhance productivity, accelerate research, generate insights, and support creative problem-solving
Proactive, decisive, independent thinker, good at problem solving and conducting industry research
Experience making slide presentations for internal and external audiences that articulate key takeaways
Creative problem solver with the ability to back up ideas with requisite fact-based arguments
Comfort working with multiple data sources, in both structured and unstructured formats, to frame a business opportunity and develop a structured path forward
Strong proficiency in Excel and PowerPoint or G-Suite
Willingness to work in a fast-paced environment under tight deadlines
Strong written and verbal communication skills, and the ability to manage cross-functional stakeholders
Experience with analytics and financial modeling
US healthcare experience and/or a strong willingness and interest to learn this space. Specific areas of interest include:
Understanding of payer/provider/patient dynamics
Provider data strategy and architecture
Provider advanced analytics, AI, and NLP
Patient experience and engagement
Population health and care management
Utilization and cost management
Risk and quality management
Population health management risk models
Value-based care
Social determinants of health
We offer competitive benefits to set you up for success in and outside of work.
Here’s What We Offer
Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices Where and how we work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
Posted 6 days ago