
3632 Redshift Jobs - Page 45

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Greetings from Teknikoz

Experience: 5+ Years

Summary: We are seeking an experienced AWS Data Engineer to join our TC - Data and AIoT department. As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines and infrastructure on the AWS platform. You will work closely with cross-functional teams to ensure efficient data flow and integration, enabling effective data analysis and reporting.

Responsibilities: Design and develop data pipelines and ETL processes on the AWS platform, ensuring scalability, reliability, and performance. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions. Implement data governance and security measures to ensure compliance with industry standards and regulations. Optimize data storage and retrieval processes to enhance data accessibility and performance. Troubleshoot and resolve data-related issues, ensuring data quality and integrity. Monitor and maintain data pipelines, ensuring timely data ingestion and processing. Stay up-to-date with the latest AWS services and technologies, and evaluate their potential for enhancing our data infrastructure. Collaborate with DevOps teams to automate deployment and monitoring of data pipelines. Document technical specifications, processes, and procedures related to data engineering.

Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in data engineering, with a focus on AWS technologies. Strong knowledge of AWS services such as S3, Glue, Redshift, Athena, and Lambda. Proficiency in programming languages such as Python, SQL, and Scala. Experience with data modeling, data warehousing, and ETL processes. Familiarity with data governance and security best practices. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. AWS certifications (e.g., AWS Certified Big Data - Specialty) are a plus.
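As a rough illustration of the kind of pipeline work this role describes (S3, Athena, Lambda-style automation in Python), here is a minimal sketch that runs an Athena query over data in S3 with boto3 and waits for it to finish. The database name, table, region, and output bucket are hypothetical placeholders, not details from the posting.

```python
import time
import boto3

# Hypothetical names; replace with your own Athena database and results bucket.
DATABASE = "sales_db"
OUTPUT_LOCATION = "s3://example-athena-results/"

athena = boto3.client("athena", region_name="ap-south-1")

def run_query(sql: str) -> str:
    """Start an Athena query and block until it finishes, returning the execution id."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Query {query_id} ended in state {state}")
    return query_id

if __name__ == "__main__":
    qid = run_query("SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date")
    print("Results written to", OUTPUT_LOCATION, "for execution", qid)
```

In practice the same call pattern could sit inside a Lambda handler or a Glue job; the polling loop is only one possible way to wait for results.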

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared) - Experience with scripting language (e.g., Python, Java, or R)

Are you customer obsessed, flexible, smart and analytical, strategic yet execution focused, hungry and passionate about e-commerce, and an experienced, entrepreneurial leader with a strong work ethic? If yes, this opportunity will appeal to you. The ideal candidate will be enthusiastic about managing challenging, lengthy projects across multiple teams and locations. We are looking for a Business Analyst who shares Amazon's passion for the customer—someone who understands both the engineering and the business. This role requires an individual with an excellent understanding of SQL and query development, good business acumen and the ability to work with business, program and product teams. The successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail, an ability to work in a fast-paced and ever-changing environment, and driven by a desire to innovate in this space. To be successful in this role you should have superior communication, presentation and organizational skills. Operating in a fast-moving and sometimes ambiguous environment, you will work autonomously, taking control and responsibility for achieving the objectives of the role. This role provides opportunities to develop original ideas, approaches, and solutions in a competitive and ever-changing business climate. You will also partner with product management teams to identify long-term features and programs to improve the seller and customer experience on the Amazon platform.

Key job responsibilities 1. Data analysis by writing ETL queries in SQL/Datanet platforms 2. Highly proficient with MS Excel 3. Enabling effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format 4. Analyzing and solving business problems with focus on understanding root causes and driving forward-looking opportunities 5. Designing new metrics and enhancing existing metrics to support the future state of business processes and ensure sustainability 6. Understanding tools and processes and driving seller adoption 7. Communicating complex analysis and insights to stakeholders and business leaders, both verbally and in writing.

Preferred qualifications: Master's degree, or advanced technical degree. Knowledge of data modeling and data pipeline design. Experience with statistical analysis, correlation analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
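The statistical methods named in the qualifications above (t-test, Chi-squared) can be exercised in a few lines of Python. The sketch below uses SciPy on made-up conversion data purely as an illustration; the cohort values and contingency table are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Made-up example: per-session conversion values for two seller cohorts.
cohort_a = rng.normal(loc=0.12, scale=0.03, size=500)
cohort_b = rng.normal(loc=0.13, scale=0.03, size=500)

# Two-sample t-test: is mean conversion different between the cohorts?
t_stat, t_p = stats.ttest_ind(cohort_a, cohort_b)
print(f"t-test: t={t_stat:.3f}, p={t_p:.4f}")

# Chi-squared test of independence on a 2x2 table of (converted, not converted).
table = np.array([[320, 180],   # cohort A
                  [355, 145]])  # cohort B
chi2, chi_p, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared: chi2={chi2:.3f}, dof={dof}, p={chi_p:.4f}")
```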

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana

On-site

- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with data modeling, warehousing and building ETL pipelines - Experience in Statistical Analysis packages such as R, SAS and Matlab - Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

Rest of World (ROW) Transportation Execution team in Hyderabad is looking for an innovative, hands-on and customer-obsessed BIE for its Analytics function. The candidate must be detail oriented, have superior verbal and written communication skills, strong organizational skills, excellent technical skills and should be able to juggle multiple tasks at once. The candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience and get the right things done. This job requires you to constantly hit the ground running and have the ability to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline the process, identifying gaps in the existing process by analyzing data and liaising with the relevant team(s) to close them, and analyzing data and metrics and sharing updates with the internal teams.

Preferred qualifications: Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
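For the "use SQL to pull data from a data warehouse and Python to process it" requirement above, a minimal sketch might look like the following. The connection details, schema, and column names are invented, and the psycopg2/pandas combination is just one common choice rather than a prescribed stack.

```python
import pandas as pd
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Hypothetical connection details for a Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="***",
)

query = """
    SELECT ship_date, region, COUNT(*) AS shipments, AVG(transit_days) AS avg_transit
    FROM transportation.shipments
    WHERE ship_date >= CURRENT_DATE - 30
    GROUP BY ship_date, region
"""

df = pd.read_sql(query, conn)
conn.close()

# Simple downstream processing in Python: pivot to a region-by-date view for reporting.
summary = df.pivot_table(index="ship_date", columns="region", values="avg_transit")
print(summary.tail())
```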

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we’re committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID R0151765 | Date posted 07/07/2025 | Location Bengaluru, Karnataka

Job Description The Future Begins Here. At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC we Unite in Diversity. Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: As a Principal Data Engineer you will provide leadership and technical expertise to the analysis, definition, design and delivery of large, structured or unstructured data across different domains. You will prototype, maintain and create datasets and data architectures.

Responsibilities: Manages and influences technical analysis, design, development, maintenance and configuration of complex and non-routine data and leads the analysis, definition and design of data and the data architectures, including within R&D. Uses specialized in-depth knowledge of advanced data stores and analysis technologies, consults with data specialists and researchers to define datasets, and analyzes those within team priorities to support research and finding signals for targets. Contributes to and takes responsibility for creating future-state roadmaps for complex data architectures, data explorations, data analyses and data modelling within complex Biology, Omics, Chemistry, Competitive Intelligence, Statistical and other relevant domains. Makes recommendations for data architectures, data analysis methodologies and technology by using a deep understanding of data industry trends, data analysis possibilities, data roadmaps and strategic data plans. Guides decisions with projects and other IT groups by using persuasion and negotiation skills to reach agreement on approach and implementation. Oversees the impact of Medical and Biological data/dataset requests to support different research and leads data investigations.
Leads a team of data engineers to support Life Science Research Data Initiatives, choosing the appropriate technologies and developing advanced architectures for the largest data problems, including for R&D. Demonstrates advanced tooling and techniques to other engineers and traditional analytics organizations throughout the company. Represents the team while working on projects across domains and commercial areas. Is the internal and external go-to expert for driving advanced Computer Science and Engineering skills and techniques. Provides expertise to data engineers, peers and specialists to support research.

Skills and Qualifications Required: More than 10 years of experience in the Data and Analytics domain. AWS overview – solid knowledge of the AWS ecosystem; experience with Lambda, S3, AWS notification systems, AWS SDKs, Athena, Redshift, AWS Secrets. Expert data engineering experience – building scalable and performant ETL data pipelines using Spark; experience with data extraction/ingestion from relational databases as well as flat files and by pulling via API; data transformation and cleaning. Spark job orchestration through Airflow. Experience with pub-sub streaming and messaging – messaging queue systems, e.g. AWS SQS. Strong PySpark and Python programming skills. Solid bash scripting skills. Adequate knowledge of Agile processes, CI/CD tools and setup, including automated unit testing, code linting and quality tools.

Preferred skills: Experience with leading a technical team as a senior engineer – guidance, design and code reviews, communication with other technical teams. Experience with Databricks and its functionalities (Autoloader, APIs, scheduler) and Glue. Broader AWS knowledge, mostly in the data and ML/DL area, e.g. Amazon SageMaker, AWS RDS, AWS Step Functions. Python skills. Knowledge of GxP processes and documentation.

BENEFITS: It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are: Competitive Salary + Performance Annual Bonus; Flexible work environment, including hybrid working; Comprehensive Healthcare Insurance Plans for self, spouse, and children; Group Term Life Insurance and Group Accident Insurance programs; Health & Wellness programs including annual health screening and weekly health sessions for employees; Employee Assistance Program; 3 days of leave every year for Voluntary Service in addition to Humanitarian Leaves; Broad variety of learning platforms; Diversity, Equity, and Inclusion Programs; Reimbursements – Home Internet & Mobile Phone; Employee Referral Program; Leaves – Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days).

ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

#Li-Hybrid Locations IND - Bengaluru Worker Type Employee Worker Sub-Type Regular Time Type Full time
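"Spark job orchestration through Airflow", as listed in the required skills above, might be sketched roughly as follows. This assumes Airflow 2.4+ and a spark-submit binary available on the worker; the DAG id, task names, script path, and S3 bucket are invented for illustration.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal daily DAG that submits a PySpark ETL job and then publishes its output.
with DAG(
    dag_id="research_data_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/ingest_omics.py --run-date {{ ds }}"
        ),
    )
    publish = BashOperator(
        task_id="publish_dataset",
        bash_command="aws s3 sync /tmp/omics_out s3://example-research-bucket/omics/{{ ds }}/",
    )
    ingest >> publish  # publish only after the Spark job succeeds
```

A dedicated Spark operator from the Airflow providers package could replace the BashOperator; the plain spark-submit call is used here only to keep the sketch self-contained.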

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 1+ years of data engineering experience - Experience with SQL - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting language (e.g., Python, KornShell) IN Data Engineering & Analytics(IDEA) Team is looking to hire a rock star Data Engineer to build and manage the largest petabyte-scale data infrastructure in India for Amazon India businesses. IN Data Engineering & Analytics (IDEA) team is the central Data engineering and Analytics team for all A.in businesses. The team's charter includes 1) Providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams which includes central Petabyte-scale Redshift data warehouse, analytics infrastructure and frameworks for visualizing and automating generation of reports & insights and self-service data applications for ingesting, storing, discovering, processing & querying of the data 2) Providing business specific data solutions for various business streams like Payments, Finance, Consumer & Delivery Experience. The Data Engineer will play a key role in being a strong owner of our Data Platform. He/she will own and build data pipelines, automations and solutions to ensure the availability, system efficiency, IMR efficiency, scaling, expansion, operations and compliance of the data platform that serves 200 + IN businesses. The role sits in the heart of technology & business worlds and provides opportunity for growth, high business impact and working with seasoned business leaders. An ideal candidate will be someone with sound technical background in managing large data infrastructures, working with petabyte-scale data, building scalable data solutions/automations and driving operational excellence. An ideal candidate will be someone who is a self-starter that can start with a Platform requirement & work backwards to conceive and devise best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight, business impact and ‘gets work done’ in business time. Key job responsibilities 1. Design/implement automation and manage our massive data infrastructure to scale for the analytics needs of Amazon IN. 2. Build solutions to achieve BAA(Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency & compliance. 3. Enable efficient data exploration, experimentation of large datasets on our data platform and implement data access control mechanisms for stand-alone datasets 4. Design and implement scalable and cost effective data infrastructure to enable Non-IN(Emerging Marketplaces and WW) use cases on our data platform 5. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Amazon and AWS big data technologies 6. Must possess strong verbal and written communication skills, be self-driven, and deliver high quality results in a fast-paced environment. 7. Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations 8. Enjoy working closely with your peers in a group of very smart and talented engineers. A day in the life India Data Engineering and Analytics (IDEA) team is central data engineering team for Amazon India. 
Our vision is to simplify and accelerate data driven decision making for Amazon India by providing cost effective, easy & timely access to high quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India) which serves as a central data platform and provides data engineering infrastructure, ready to use datasets and self-service reporting capabilities. Our core responsibilities towards India marketplace include a) providing systems(infrastructure) & workflows that allow ingestion, storage, processing and querying of data b) building ready-to-use datasets for easy and faster access to the data c) automating standard business analysis / reporting/ dash-boarding d) empowering business with self-service tools to manage data and generate insights. Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience with any ETL tool like, Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

DESCRIPTION Are you customer obsessed, flexible, smart and analytical, strategic yet execution focused and passionate about e-commerce? Are you an experienced, entrepreneurial leader with a strong work ethic? If yes, this opportunity will appeal to you.

IN Consumer BI Reporting and Analytics (COBRA) team is looking for a highly driven, customer-obsessed Business Intelligence Engineer who will be responsible for building the BI platform and team and supporting key decision making across the group. You’ll analyze large amounts of data, discover and solve real-world problems, build metrics and business cases around key projects and, most of all, be an integral part of creating a better customer and seller experience. We are looking for customer-obsessed, data-driven entrepreneurs to join our growing team. Solve some of the hardest problems for our customers and Sellers. If you want to operate at startup speed, solve some of the hardest problems and build a service which customers love, Amazon.in might just be the place for you.

The Business Intelligence Engineer is responsible for driving deep insights about Amazon Business and driving continuous improvement using the analysis. The person should have a detailed understanding of a business requirement or the ability to quickly get to the root cause of a particular business issue, and draft solutions to meet requirements or resolve the root problems. The BIE will create pipelines for reports to analyze data, make sense of the results and be able to explain what it all means to key stakeholders. This individual will analyze large amounts of data, discover and solve real-world problems and build metrics and business cases around key performance of the P3P programs. The ideal candidate will use a customer-backwards approach in deriving insights and identifying actions we can take to improve the customer experience and conversion for the program.

Key job responsibilities Develop and streamline necessary dashboards and one-off analyses, providing the ability to surface business-critical KPIs, monitor the health of metrics and effectively communicate performance. Partner with stakeholders and other Business Intelligence teams to acquire necessary data for robust analysis. Convert data into insights, including implications and recommendations that are specific and actionable for the P3P team and across the business. Partner with other analysts as well as data engineering and technology teams to support building best-in-class dashboards and data infrastructure. Communicate insights using data visualization and presentations to stakeholders. The successful candidate will be an expert at analyzing large data sets and have exemplary communication skills. The candidate will need to be a self-starter, very comfortable with ambiguity in a fast-paced and ever-changing environment, and able to think big while paying careful attention to detail.

BASIC QUALIFICATIONS 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience. Experience with data visualization using Tableau, Quicksight, or similar tools. Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared). Experience with scripting language (e.g., Python, Java, or R).

PREFERRED QUALIFICATIONS Master's degree, or advanced technical degree. Knowledge of data modeling and data pipeline design. Experience with statistical analysis, correlation analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Data Architect We are seeking an experienced and forward-thinking Data Architect with deep expertise in Big Data ecosystems to design and lead scalable data architecture solutions. The ideal candidate will be responsible for designing, building, and optimizing data pipelines, storage frameworks, and architecture patterns to support advanced analytics, machine learning, and business intelligence initiatives.

Key Responsibilities Lead the design and development of enterprise-grade Big Data architectures using tools such as Hadoop, Spark, Kafka, Hive, etc. Architect scalable data lakes, data warehouses, and streaming platforms to support structured and unstructured data ingestion. Collaborate with business, data science, and engineering teams to understand requirements and translate them into robust architecture designs. Define data modeling standards, metadata management, and governance frameworks. Ensure data quality, lineage, and integrity across data systems. Drive best practices for performance, reliability, and security of large-scale data processing pipelines. Evaluate emerging technologies and make recommendations for adoption in alignment with business goals. Provide technical leadership and mentorship to data engineers and developers. Establish and enforce architectural standards, design patterns, and documentation.

Required Skills & Experience 8+ years of overall experience in data architecture, data engineering, or a similar role. Strong expertise in Big Data tools. Deep experience with data modeling, ETL frameworks, and distributed data processing. Hands-on knowledge of cloud platforms (AWS, GCP, Azure) and tools like Amazon Redshift, BigQuery, Snowflake, or Databricks. Proficiency in SQL and scripting languages such as Python, Scala, or Java. Experience in designing real-time data streaming and batch processing pipelines. Knowledge of data governance, security, and compliance best practices. Excellent problem-solving skills and ability to communicate complex technical concepts to non-technical stakeholders.

Preferred Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, or related field. Certifications in Big Data technologies or cloud platforms (e.g., AWS Certified Data Analytics, GCP Professional Data Engineer). Experience working with BI tools (Tableau, Power BI, Looker) and supporting data visualization requirements. Exposure to ML pipeline integration is a plus (ref:hirist.tech)

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Panchkula, Haryana, India

On-site

Position title: Sr. ETL Engineer | Location: Panchkula, India | Date posted: July 4, 2025

Description We are looking for a skilled and experienced ETL Engineer to join our growing team at Grazitti Interactive. In this role, you will be responsible for building and managing scalable data pipelines across traditional and cloud-based platforms. You will work with structured and unstructured data sources, leveraging tools such as SQL Server, Snowflake, Redshift, and BigQuery to deliver high-quality data solutions. If you have hands-on experience in Python, PySpark, and cloud platforms like AWS or GCP, along with a passion for transforming data into insights, we'd love to connect with you.

Key Skills Strong experience (4-10 years) in ETL development using platforms like SQL Server, Oracle, and cloud environments like Amazon S3, Snowflake, Redshift, Data Lake, and Google BigQuery. Proficient in Python, with hands-on experience creating data pipelines using APIs. Solid working knowledge of PySpark for large-scale data processing. Ability to output results in various formats, including JSON, data feeds, and reports. Skilled in data manipulation, schema design, and transforming data across diverse sources. Strong understanding of core AWS/Google Cloud services and basic cloud architecture. Capable of developing, deploying, and debugging cloud-based data assets. Expert-level proficiency in SQL with a solid grasp of relational and cloud-based databases. Excellent ability to understand and adapt to evolving business requirements. Strong communication and collaboration skills, with experience in onsite/offshore delivery models. Familiarity with Marketo, Salesforce, Google Analytics, and Adobe Analytics. Working knowledge of Tableau and Power BI for data visualization and reporting.

Roles and Responsibilities Design and implement robust ETL processes to ensure data integrity and accuracy across systems. Develop reusable data solutions and optimize performance across traditional and cloud environments. Collaborate with cross-functional teams, including data analysts, marketers, and engineers, to define data requirements and deliver insights. Take ownership of end-to-end data pipelines, from requirement gathering to deployment and monitoring. Ensure compliance with internal QMS and ISMS standards. Proactively report any data incidents or concerns to reporting managers.

Contacts Email: careers@grazitti.com Address: HSIIDC Technology Park, Plot No 19, Sector 22, 134104, Panchkula, Haryana, India
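For "creating data pipelines using APIs" with output "in various formats, including JSON", a bare-bones sketch might look like the following. The endpoint URL, pagination scheme, and field names are hypothetical, not taken from the posting.

```python
import json
import requests

# Hypothetical REST endpoint returning paginated lead records.
BASE_URL = "https://api.example.com/v1/leads"

def fetch_all(page_size: int = 100) -> list[dict]:
    """Pull every page from the API and return the combined records."""
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "per_page": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    leads = fetch_all()
    # Keep only the fields the downstream report needs, then emit a JSON feed.
    feed = [{"email": r.get("email"), "source": r.get("utm_source"), "score": r.get("score")} for r in leads]
    with open("leads_feed.json", "w", encoding="utf-8") as fh:
        json.dump(feed, fh, indent=2)
    print(f"Wrote {len(feed)} records to leads_feed.json")
```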

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Exp: 4+ yrs | Location: Pune | Immediate to 15 Days

About the Role / Job Description

Qualifications: Bachelor's degree in a quantitative field such as Data Science, Statistics, Mathematics, Computer Science, or a related discipline.

Required Skills: Data visualization best practices. Proven experience in developing advanced dashboards and performing data analysis. Ability to create clear, intuitive, and impactful visualizations (charts, graphs, tables, KPIs) that effectively communicate insights. Extensive experience with AWS QuickSight (or a similar BI tool): hands-on experience in building, publishing, and maintaining interactive dashboards and reports. QuickSight data sources: experience connecting QuickSight to various data sources, especially those common in AWS environments (e.g., S3, Redshift, Athena, RDS, Glue). QuickSight dataset creation and management: proficiency in creating, transforming, and optimizing datasets within QuickSight, including calculated fields, parameters, and filters. Performance optimization: knowledge of how to optimize QuickSight dashboards and data for speed and scalability.

Preferred Skills: Experience with AWS environments (e.g., S3, Redshift, Athena, RDS, Glue).

Posted 3 weeks ago

Apply

5.0 years

6 - 13 Lacs

India

On-site

Job Title: Data Warehousing Specialist Experience: Minimum 5 Years Location: Kochi, Kerala Job Type: Full-Time Job Summary: We are looking for a skilled Data Warehousing Specialist with at least 5 years of experience to join our growing IT team in Kochi. The ideal candidate should have strong expertise in SQL , MS SQL Server , and MongoDB , and be able to design, build, and maintain robust data warehouse solutions that support business intelligence and analytics needs. Key Responsibilities: Design, develop, and maintain data warehouse solutions Develop ETL processes and data integration pipelines Optimize database performance and ensure data integrity Collaborate with data analysts and business teams to understand data needs Work with both structured (SQL, MS SQL) and unstructured data (MongoDB) Ensure security and compliance standards for data storage and handling Requirements: Minimum 5 years of experience in data warehousing and database management Strong hands-on experience with SQL , MS SQL Server , and MongoDB Solid understanding of data modeling, schema design, and normalization Experience with ETL tools and data integration frameworks Ability to troubleshoot and optimize complex queries Good communication skills and a problem-solving mindset Preferred Qualifications: Experience with cloud-based data warehousing solutions (e.g., AWS Redshift, Snowflake) Exposure to BI tools (e.g., Power BI, Tableau) is a plus Work Location: Kochi, Kerala Employment Type: Full-Time, Hybrid Mode Job Type: Full-time Pay: ₹55,000.00 - ₹115,000.00 per month Benefits: Health insurance Paid sick time Schedule: Monday to Friday Weekend availability Supplemental Pay: Performance bonus Education: Bachelor's (Required) Experience: Data warehouse: 5 years (Required) Language: English (Required) Work Location: In person Application Deadline: 15/08/2025
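Working across "structured (SQL, MS SQL) and unstructured data (MongoDB)" as described above often means landing document data into warehouse tables. The sketch below shows one possible shape for that step; the connection strings, database, collection, and table names are invented, and the pymongo/pandas/SQLAlchemy combination is an assumption rather than a required stack.

```python
import pandas as pd
from pymongo import MongoClient
from sqlalchemy import create_engine

# Hypothetical source: a MongoDB collection of order documents.
mongo = MongoClient("mongodb://localhost:27017")
orders = mongo["shopdb"]["orders"]

# Flatten the document fields we care about into rows.
rows = [
    {
        "order_id": str(doc.get("_id")),
        "customer": doc.get("customer", {}).get("name"),
        "total": doc.get("total"),
        "created_at": doc.get("created_at"),
    }
    for doc in orders.find({}, limit=10_000)
]
df = pd.DataFrame(rows)

# Hypothetical target: a staging table in SQL Server, loaded via SQLAlchemy/pyodbc.
engine = create_engine(
    "mssql+pyodbc://etl_user:***@dw-server/warehouse?driver=ODBC+Driver+17+for+SQL+Server"
)
df.to_sql("stg_orders", engine, if_exists="append", index=False)
print(f"Loaded {len(df)} rows into stg_orders")
```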

Posted 3 weeks ago

Apply

0 years

2 - 7 Lacs

Pune

On-site

Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come. Join us at Syensqo, where our IT team is gearing up to enhance its capabilities. We play a crucial role in the group's transformation—accelerating growth, reshaping progress, and creating sustainable shared value. IT team is making operational adjustments to supercharge value across the entire organization. Here at Syensqo, we're one strong team! Our commitment to accountability drives us as we work hard to deliver value for our customers and stakeholders. In our dynamic and collaborative work environment, we add a touch of enjoyment while staying true to our motto: reinvent progress. Come be part of our transformation journey and contribute to the change as a future team member. We are looking for: As a Data/ML Engineer, you will play a central role in defining, implementing, and maintaining cloud governance frameworks across the organization. You will collaborate with cross-functional teams to ensure secure, compliant, and efficient use of cloud resources for data and machine learning workloads. Your expertise in full-stack automation, DevOps practices, and Infrastructure as Code (IaC) will drive the standardization and scalability of our cloud-based data and ML platforms. Key requirements are: Ensuring cloud data governance Define and maintain central cloud governance policies, standards, and best practices for data, AI and ML workloads Ensure compliance with security, privacy, and regulatory requirements across all cloud environments Monitor and optimize cloud resource usage, cost, and performance for data, AI and ML workloads Design and Implement Data Pipelines Co-develop, co-construct, test, and maintain highly scalable and reliable data architectures, including ETL processes, data warehouses, and data lakes with the Data Platform Team Build and Deploy ML Systems Co-design, co-develop, and deploy machine learning models and associated services into production environments, ensuring performance, reliability, and scalability Infrastructure Management Manage and optimize cloud-based infrastructure (e.g., AWS, Azure, GCP) for data storage, processing, and ML model serving Collaboration Work collaboratively with data scientists, ML engineers, security and business stakeholders to align cloud governance with organizational needs Provide guidance and support to teams on cloud architecture, data management, and ML operations. Work collaboratively with other teams to transition prototypes and experimental models into robust, production-ready solutions Data Governance and Quality: Implement best practices for data governance, data quality, and data security to ensure the integrity and reliability of our data assets. Performance and Optimisation: Identify and implement performance improvements for data pipelines and ML models, optimizing for speed, cost-efficiency, and resource utilization. 
Monitoring and Alerting Establish and maintain monitoring, logging, and alerting systems for data pipelines and ML models to proactively identify and resolve issues Tooling and Automation Design and implement full-stack automation for data pipelines, ML workflows, and cloud infrastructure Build and manage cloud infrastructure using IaC tools (e.g., Terraform, CloudFormation) Develop and maintain CI/CD pipelines for data and ML projects Promote DevOps culture and best practices within the organization Develop and maintain tools and automation scripts to streamline data operations, model training, and deployment processes Stay Current on new ML / AI trends: Keep abreast of the latest advancements in data engineering, machine learning, and cloud technologies, evaluating and recommending new tools and approach Document processes, architectures, and standards for knowledge sharing and onboarding Education and experience Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field. (Relevant work experience may be considered in lieu of a degree). Programming: Strong proficiency in Python (essential) and experience with other relevant languages like Java, Scala, or Go. Data Warehousing/Databases: Solid understanding and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) is highly desirable. Big Data Technologies: Hands-on experience with big data processing frameworks (e.g., Spark, Flink, Hadoop). Cloud Platforms: Experience with at least one major cloud provider (AWS, Azure, or GCP) and their relevant data and ML services (e.g., S3, EC2, Lambda, EMR, SageMaker, Dataflow, BigQuery, Azure Data Factory, Azure ML). ML Concepts: Fundamental understanding of machine learning concepts, algorithms, and workflows. MLOps Principles: Familiarity with MLOps principles and practices for deploying, monitoring, and managing ML models in production. Version Control: Proficiency with Git and collaborative development workflows. Problem-Solving: Excellent analytical and problem-solving skills with a strong attention to detail. Communication: Strong communication skills, able to articulate complex technical concepts to both technical and non-technical stakeholders. Bonus Points (Highly Desirable Skills & Experience): Experience with containerisation technologies (Docker, Kubernetes). Familiarity with CI/CD pipelines for data and ML deployments. Experience with stream processing technologies (e.g., Kafka, Kinesis). Knowledge of data visualization tools (e.g., Tableau, Power BI, Looker). Contributions to open-source projects or a strong portfolio of personal projects. Experience with [specific domain knowledge relevant to your company, e.g., financial data, healthcare data, e-commerce data]. Language skills Fluent English What’s in it for the candidate Be part of a highly motivated team of explorers Help make a difference and thrive in Cloud and AI technology Chart your own course and build a fantastic career Have fun and enjoy life with an industry leading remuneration pack About us Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. 
Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity. At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 3 weeks ago

Apply

3.0 years

5 - 6 Lacs

Pune

On-site

COMPANY OVERVIEW Domo is a cloud-native data experiences innovator that puts data to work for everyone. Underpinned by AI, data science, and a secure data foundation, our platform makes data actionable with user-friendly dashboards and apps. With Domo, companies get intuitive, agile data experiences that power exponential business impact.

POSITION SUMMARY Our Technical Support team is looking for problem solvers with executive presence and polish—highly versatile, reliable, self-starting individuals with deep technical troubleshooting skills and experience. You will help Domo clients facilitate their digital transformation and strategic initiatives and increase brand loyalty and referenceability through world-class technical support. When our customers succeed, we succeed. The Technical Support team is staffed 24/7, which allows our global customers to contact us at their convenience. Support Team members build strong, lasting relationships with customers by understanding their needs and concerns. This team takes the lead in providing a world-class experience for every person who contacts Domo through our Support Team.

KEY RESPONSIBILITIES Provide exceptional service by connecting, solving, and building relationships with our customers. Interactions may include casework over telephone, email, Zoom, in person, or other internal tools, as needed and determined by the business. Our advisors are offered a high degree of latitude to think outside the box and to find and develop solutions; successful candidates will demonstrate independent thinking that consistently leads to robust and scalable solutions for our customers. Perpetually expand your knowledge of Domo's platform, Business Intelligence, data, and analytics through on-the-job training, time for side projects, and Domo certification. Provide timely (SLAs), constant, and ongoing communication with your peers and customers regarding their support cases until those cases are solved.

JOB REQUIREMENTS Essential: Bachelor's degree in a technical field (computer science, mathematics, statistics, analytics, etc.) or 3-5 years related experience in a relevant field. Show us that you know how to learn, find answers, and develop solutions on your own. At least 2 years of experience in a support role, ideally in a customer-facing environment. Communicate clearly and effectively with customers to fully meet their needs. You will be working with experts in their field; quickly establishing rapport and trust with them is critical. Strong SQL experience is a must: you should be able to explain, from memory, the basic purpose and SQL syntax behind joins, unions, selects, grouping, aggregation, indexes, subqueries, etc. Software application support experience; preference given for SaaS, analytics, data, and Business Intelligence fields. Tell us about your experience working methodically through queues, following through on commitments, SOPs, company policies, and professional communication etiquette through verbal and written correspondence. Flexible and adaptable to rapid change; this is a fast-paced industry and there will always be something new to learn.

Desired: APIs - REST/SOAP, endpoints, uses, authentication, methods, Postman; Programming languages - Python, JavaScript, Java, etc. Relational databases - MySQL, PostgreSQL, MSSQL, Redshift, Oracle, ODBC, OLE DB, JDBC Statistical computing - R, Jupyter JSON/XML – Reading, parsing, XPath, etc. SSO/IDP – OpenID Connect, SAML, Okta, Azure AD, Ping Identity Snowflake Data Cloud / ETL.
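The SQL fundamentals called out above (joins, grouping, aggregation, subqueries) can be rehearsed end to end with nothing but Python's built-in sqlite3 module; the tiny schema and data below are made up for the exercise and stand in for any relational database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Tiny made-up schema: customers and their support cases.
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, plan TEXT);
    CREATE TABLE cases (id INTEGER PRIMARY KEY, customer_id INTEGER, severity TEXT);
    INSERT INTO customers VALUES (1, 'Acme', 'enterprise'), (2, 'Globex', 'standard'), (3, 'Initech', 'standard');
    INSERT INTO cases VALUES (1, 1, 'high'), (2, 1, 'low'), (3, 2, 'high'), (4, 2, 'high');
""")

# JOIN + GROUP BY + aggregation: open cases per customer.
for row in cur.execute("""
    SELECT c.name, COUNT(k.id) AS case_count
    FROM customers c
    LEFT JOIN cases k ON k.customer_id = c.id
    GROUP BY c.name
    ORDER BY case_count DESC
"""):
    print(row)

# Subquery: customers with more high-severity cases than the per-customer average.
for row in cur.execute("""
    SELECT name FROM customers
    WHERE id IN (
        SELECT customer_id FROM cases WHERE severity = 'high'
        GROUP BY customer_id
        HAVING COUNT(*) > (SELECT AVG(cnt) FROM (
            SELECT COUNT(*) AS cnt FROM cases WHERE severity = 'high' GROUP BY customer_id))
    )
"""):
    print(row)

conn.close()
```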
LOCATION: Pune, Maharashtra, India INDIA BENEFITS & PERKS Medical cash allowance provided Maternity and Paternity Leave policy Baby bucks: cash allowance to spend on anything for every newborn or child adopted Haute Mama: cash allowance to spend on Maternity Wardrobe (only for women employees) 18 days paid time off + 10 holidays + 12 medical leaves Sodexo Meal Pass Health and Wellness Benefit One-time Technology Benefit towards the purchase of a tablet or smartwatch Corporate National Pension Scheme Employee Assistance Programme (EAP) Domo is an equal opportunity employer. #LI-TU1 #LI-Hybrid

Posted 3 weeks ago

Apply

5.0 years

5 - 8 Lacs

Bengaluru

On-site

Job Role: Data Engineer – Consultant

Offering customer-tailored services and deep industry insights, Deloitte Consulting LLP helps clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies and stay ahead of customer demand. Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovative cloud-based solutions across a range of domains and industries (e.g. supply chain management, banking/insurance, CPG, retail, etc.). It is a fast-paced, innovative and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source. We are building and bringing solutions to market which we are hosting and operating for our clients.

Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance.

Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using the Python, Spark and Snowflake ecosystems (a minimal PySpark sketch appears at the end of this listing). Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based and other cutting-edge technologies to resolve identities at scale. Lead cross-functional initiatives and collaborate with multiple, distributed teams. Produce high quality code that is robust, efficient, testable and easy to maintain. Deliver operational automation and tooling to minimize repeated manual tasks. Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members. Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact.

Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enable financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI goals on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html

Prior Experience: 5 to 8 years of experience in data engineering.

Skills/Project Experience - Required: 2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake or equivalent technologies. Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.). Knowledge and experience of working with large datasets. Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.).
Experience with developing or consuming web interfaces (REST API). Experience with modern software development practices, leveraging CI/CD and containerization such as Docker. Self-driven with a passion for learning and implementing new technologies. A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers.

Good to Have Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.). Experience with or interest in implementing graph-based technologies. Knowledge of or interest in data science & machine learning. Experience with backend infrastructure and how to architect data pipelines. Knowledge of system design and distributed systems. Experience working in a product engineering environment. Experience with data warehouses (BigQuery, Redshift, etc.).

Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune

Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 302301
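As a rough sketch of the "scalable data pipelines using Python and Spark" side of this role (Snowflake/Snowpark specifics omitted), the PySpark fragment below reads raw events, derives a couple of columns, and writes a partitioned table. All paths, column names, and the daily-rollup logic are assumptions for illustration, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_daily_rollup").getOrCreate()

# Hypothetical input: newline-delimited JSON events landed by an upstream ingest job.
events = spark.read.json("s3a://example-raw-bucket/events/2025-07-01/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("user_id").isNotNull())
    .groupBy("event_date", "event_type")
    .agg(
        F.countDistinct("user_id").alias("unique_users"),
        F.count(F.lit(1)).alias("event_count"),
    )
)

# Write a partitioned Parquet table that downstream Snowflake or BI loads can pick up.
daily.write.mode("overwrite").partitionBy("event_date").parquet("s3a://example-curated-bucket/events_daily/")

spark.stop()
```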

Posted 3 weeks ago

Apply

0 years

5 - 8 Lacs

Bengaluru

On-site

Job Role: Data Engineer – Gen AI Data Pipeline – Consultant

Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across the project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting.

Work you’ll do A Cloud Data Engineer will be responsible for the following activities: Participate in application architecture and design discussions. Work with team leads in defining the solution design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure alignment with pre-defined architectural standards, guidelines, best practices, and quality standards. Work on defects/bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like scrum meetings, sprint planning, etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform/products.

The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com.

Qualifications and Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform: Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies.
Generative AI Experience: Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content. Integrated RAG models with existing data pipelines to optimize information retrieval processes. Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data. Knowledge of vector search algorithms to enhance the performance of AI-driven applications (a rough illustration of the retrieval step appears at the end of this listing). Large Language Models (LLMs): Experience with deploying and fine-tuning large language models for various NLP tasks. Integrated LLMs into data processing workflows to automate and enhance data analysis. LangChain: Knowledge of LangChain for building and managing complex data workflows. Experience in scalable data pipelines using LangChain to streamline data processing and integration tasks. Efficiency Improvements: Should have implementation experience in reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions.

Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits to help you thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
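To make the RAG requirement in the listing above concrete, here is a minimal sketch of just the retrieval step: embed a query, rank documents from an in-memory store by cosine similarity, and assemble a grounded prompt. The embed() function and the sample documents are placeholders for illustration only, not part of any specific product or posting.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in practice this would call an embedding model
    (a hosted endpoint or a local sentence-embedding library)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

documents = [
    "Redshift stores data in columnar format and distributes it across slices.",
    "Athena runs serverless SQL queries directly against data in S3.",
    "Kinesis ingests streaming records for near-real-time pipelines.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    # Cosine similarity between the query vector and every document vector.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in top]

context = "\n".join(retrieve("How do I query data sitting in S3?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How do I query data in S3?"
print(prompt)

In a production pipeline the in-memory store would typically be replaced by a vector database, and the assembled prompt would be passed to an LLM; frameworks such as LangChain wrap these same steps.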

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description Have you ever ordered a product on Amazon and, when that box with the smile arrived, wondered how it got to you so fast? Wondered where it came from and how much it cost Amazon? If so, Amazon’s Supply Chain Optimization Technology (SCOT) organization is for you. At SCOT, we solve deep technical problems and build innovative solutions in a fast-paced environment working with smart & passionate team members. (Learn more about SCOT: http://bit.ly/amazon-scot) Our vision is to ensure Amazon Customers have the best experience on Amazon throughout the year, and do not have to settle for a less-than-optimal experience during high-traffic or deal events. The SCOT team is seeking highly motivated individuals with exceptional data analytics skills and a passion for tackling intricate challenges. In this role, you will utilize your expertise to inform impactful business decisions that enhance customer experience and contribute to long-term free cash flow growth. You will gain a comprehensive understanding of Amazon's systems and supply chain processes through collaboration with diverse teams across product, science, tech, retail categories, finance, and operations. This role requires partnering closely with Product Managers across SCOT to segment our key Customer Experience and Supply Chain metrics, such as SoROOS and Local In-Stock, through the year, identify key opportunities to improve our systems and processes, and deliver Best-At-Amazon experiences for Customers throughout the year. Key job responsibilities Analyze and synthesize large data streams across multiple systems/inputs. Work with Product Managers to understand customer behaviors, spot system defects, and benchmark our ability to serve our customers, improving a wide range of internal products that impact inventory availability for customers both nationally and regionally. Develop business insights based on data extraction, data analytics, trend deduction and pattern recognition. Present these business insights to senior management/executives. Create advanced dashboards that help a large group of teams consume insights, make changes to business processes, and track progress. Build analytical models that can help improve business outcomes at scale, enhancing current system capabilities. Basic Qualifications 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in statistical analysis packages such as R, SAS and MATLAB Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka - A66 Job ID: A2976878
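As a hedged illustration of the day-to-day analysis described above, the sketch below pulls an aggregate from a Redshift-compatible endpoint with SQL and computes a simple in-stock rate per region in pandas. The connection string, schema, table, and column names are hypothetical.

import pandas as pd
from sqlalchemy import create_engine

# Redshift speaks the PostgreSQL wire protocol, so a standard
# postgresql+psycopg2 connection string is commonly used (hypothetical DSN).
engine = create_engine("postgresql+psycopg2://user:password@redshift-host:5439/analytics")

query = """
    SELECT region,
           COUNT(*)                                  AS offers,
           SUM(CASE WHEN in_stock THEN 1 ELSE 0 END) AS in_stock_offers
    FROM supply_chain.offer_snapshot
    WHERE snapshot_date = CURRENT_DATE - 1
    GROUP BY region
"""

df = pd.read_sql(query, engine)
df["in_stock_rate"] = df["in_stock_offers"] / df["offers"]
print(df.sort_values("in_stock_rate").head(10))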

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Spaulding Ridge is an advisory and IT implementation firm. We help global organizations get financial clarity into the complex, daily sales and operational decisions that impact profitable revenue generation, efficient operational performance, and reliable financial management. At Spaulding Ridge, we believe all business is personal. Core to our values are our relationships with our clients, our business partners, our team, and the global community. Our employees dedicate their time to helping our clients transform their business, from strategy through implementation and business transformation. What You Will Do And Learn As a Snowflake Architect/Manager in Data Solutions, you’ll be responsible for designing, implementing, and testing proposed modern analytic solutions. Working closely with our client partners and architects, you’ll develop relationships with key technical resources while delivering tangible business outcomes. Manage the data engineering lifecycle including research, proofs of concept, architecture, design, development, test, deployment, and maintenance Collaborate with team members to design and implement technology that aligns with client business objectives Build proofs of concept for a modern analytics stack supporting a variety of Cloud-based Business Systems for potential clients Team management experience and ability to manage, mentor and develop talent of assigned junior resources Create actionable recommendations based on identified platform, structural and/or logic problems Communicate and demonstrate a clear understanding of client business needs, goals, and objectives Collaborate with other architects on solution designs and recommendations. Qualifications: 8+ years’ experience developing industry-leading business intelligence and analytic solutions Must have thorough knowledge of data warehouse concepts and dimensional modelling Must have experience in writing advanced SQL Must have at least 5+ years of hands-on experience on DBT (Data Build Tool). Mandatory to have recent hands-on experience on DBT. Must have experience working with DBT on one or more of the modern databases like Snowflake / Amazon Redshift / BigQuery / Databricks / etc. 
Hands-on experience with Snowflake would carry higher weightage Snowflake SnowPro Core certification would carry higher weightage Experience working in AWS, Azure, GCP or similar cloud data platforms would be an added advantage Hands-on experience on Azure would carry higher weightage Must have experience in setting up DBT projects Must have experience in understanding, creating, modifying and optimizing YAML files within DBT Must have experience in implementing and managing data models using DBT, ensuring efficient and scalable data transformations Must have experience with various materialization techniques within DBT Must have experience in writing and executing DBT test cases Must have experience in setting up DBT environments Must have experience in setting up DBT jobs Must have experience with writing DBT Jinja and macros Must have experience in creating DBT snapshots Must have experience in creating and managing incremental models using DBT Must have experience with DBT Docs Should have a good understanding of DBT seeds Must have experience with DBT deployment Must have experience architecting data pipelines using DBT, utilizing advanced DBT features Proficiency in version control systems and CI/CD Must have hands-on experience configuring DBT with one or more version control systems like Azure DevOps / GitHub / GitLab / etc. Must have experience in PR approval workflows Participate in code reviews and best practices for SQL and DBT development Experience working with visualization tools such as Tableau, PowerBI, Looker and other similar analytic tools would be an added advantage 2+ years of Business Data Analyst experience 2+ years of experience writing business requirements, use cases and/or user stories for data warehouse or data mart initiatives. Understanding of and experience with ETL/ELT is an added advantage 2+ years of consulting experience working on project-based delivery using the Software Development Life Cycle (SDLC) 2+ years of experience with relational databases (Postgres, MySQL, SQL Server, Oracle, Teradata, etc.) 2+ years of experience creating functional test cases and supporting user acceptance testing 2+ years of experience in Agile/Kanban/DevOps delivery Outstanding analytical, communication, and interpersonal skills. Ability to manage projects and teams against planned work Responsible for managing the day-to-day client relationship on projects Spaulding Ridge’s Commitment to an Inclusive Workplace When we engage the expertise, insights, and creativity of people from all walks of life, we become a better organization, we deliver superior services to clients, and we transform our communities and world for the better. At Spaulding Ridge, we believe our team should reflect the rich diversity of society and we take seriously the responsibility to cultivate a workplace where every bandmate feels accepted, respected, and valued for who they are. We do this by creating a culture of trust and belonging, through practices and policies that support inclusion, and through our employee-led Employee Resource Groups (ERGs): CRE (Cultural Race and Ethnicity), Women Elevate, PROUD and Mental Wellness Alliance. The company is committed to offering Equal Employment Opportunity and to providing reasonable accommodation to applicants with physical and/or mental disabilities. 
If you are interested in applying for employment with Spaulding Ridge and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to our VP of Human Resources, Cara Halladay (challaday@spauldingridge.com). Requests for reasonable accommodation will be considered on a case-by-case basis. Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, gender, sexual orientation, gender identity, protected veteran status or disability.
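The incremental-model requirement in the listing above boils down to a high-water-mark merge: only rows newer than what the target already holds are transformed and appended. dbt's incremental materialization generates this kind of SQL for you from a model definition; the Python sketch below only illustrates the underlying pattern, with a placeholder connection string and hypothetical table and column names.

from sqlalchemy import create_engine, text

# Placeholder warehouse connection; in dbt this would live in profiles.yml.
engine = create_engine("snowflake://user:password@account/db/schema")

INCREMENTAL_LOAD = text("""
    INSERT INTO analytics.fct_orders (order_id, customer_id, amount, updated_at)
    SELECT o.order_id, o.customer_id, o.amount, o.updated_at
    FROM raw.orders AS o
    WHERE o.updated_at > (SELECT COALESCE(MAX(updated_at), '1900-01-01')
                          FROM analytics.fct_orders)
""")

with engine.begin() as conn:
    # Append only the rows newer than the current high-water mark.
    conn.execute(INCREMENTAL_LOAD)

In a dbt project the same filter is expressed inside the model with the is_incremental() macro and a config(materialized='incremental') block, so the tool handles first-run versus incremental-run logic.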

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Who We Are Motive empowers the people who run physical operations with tools to make their work safer, more productive, and more profitable. For the first time ever, safety, operations and finance teams can manage their drivers, vehicles, equipment, and fleet-related spend in a single system. Combined with industry-leading AI, the Motive platform gives you complete visibility and control, and significantly reduces manual workloads by automating and simplifying tasks. Motive serves more than 100,000 customers – from Fortune 500 enterprises to small businesses – across a wide range of industries, including transportation and logistics, construction, energy, field service, manufacturing, agriculture, food and beverage, retail, and the public sector. Visit gomotive.com to learn more. About The Role As a Database Engineer at Motive, you will ensure that our databases are performant, reliable, scalable, and automated. Acting as a DBA overseeing more than 100 databases, almost all of which are Postgres, you will also have a hand in the success of Motive’s expanded use of AWS managed data-layer services (such as DynamoDB, Elasticsearch, and Redshift). Partnering with product-team engineers across a wide variety of use cases, you will analyze the performance of databases and systems, provide optimizations, and collaborate on query tuning to help us keep scaling safely. You will also help run zero-downtime upgrades and maintenance as Motive reaches 99.99% SLAs, helped in large part by our Database Engineering team’s focus on reliability. Responsibilities Design and implement high-availability database architecture in AWS Partner with developers and SREs to build and automate the provisioning of new infrastructure Continuously monitor and improve database performance via query and index tuning, schema updates, partitioning, etc. Collaborate with developers to tune and optimize database queries Work with our Data and ML teams on Kafka and Snowflake integrations and data flows from our main DBs Perform Database Administrator tasks and maintenance, ensuring the health of database tables Plan and execute disaster recovery scenarios Build dashboards for database health and alerting Perform Terraform deployments of database instances, users, groups, and permissions Manage database upgrades and migrations with minimal to zero downtime. Take on-call duty to respond to production incidents Requirements B.S. or M.S. in Computer Science or a related field, or equivalent work experience Overall 5+ years of experience and 3+ years working with PostgreSQL Experience building and maintaining mission-critical production PostgreSQL databases Solid understanding of PostgreSQL database architecture (locking, consistency, transaction logging, etc.) Experience with AWS managed database services (Aurora, etc.) Experience with high-availability, backup and recovery solutions and strategies Advanced knowledge of query tuning and optimization techniques Experience provisioning PostgreSQL databases in AWS with tools like Terraform, CloudFormation, Ansible Experience with monitoring and logging tools like Datadog, New Relic, Sumo Logic Experience with other databases like DynamoDB, Redshift, Elasticsearch, Snowflake is a plus Creating a diverse and inclusive workplace is one of Motive's core values. We are an equal opportunity employer and welcome people of different backgrounds, experiences, abilities and perspectives. Please review our Candidate Privacy Notice here. 
UK Candidate Privacy Notice here. The applicant must be authorized to receive and access those commodities and technologies controlled under U.S. Export Administration Regulations. It is Motive's policy to require that employees be authorized to receive access to Motive products and technology.
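For the query-tuning part of the role above, a common first step is to capture the planner's view of a slow statement and spot indexes that are never used. A minimal sketch with psycopg2 follows; the DSN, table, and column names are hypothetical.

import psycopg2

conn = psycopg2.connect("dbname=fleet user=dba host=db.internal")  # hypothetical DSN
conn.autocommit = True

with conn.cursor() as cur:
    # Ask the planner how it executes the query, including actual timings and buffers.
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicle_events WHERE driver_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        print(line)

    # Indexes with zero scans are candidates for review (they still cost writes and space).
    cur.execute("""
        SELECT relname, indexrelname, idx_scan
        FROM pg_stat_user_indexes
        WHERE idx_scan = 0
        ORDER BY relname
    """)
    for relname, indexrelname, idx_scan in cur.fetchall():
        print(f"unused index: {indexrelname} on {relname}")

conn.close()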

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description - ML Engineer: Strong experience of at least 2-3 years in Python. 2+ years’ experience working on feature/data pipelines and feature stores using PySpark. Exposure to AWS cloud services such as SageMaker, Bedrock, Kendra, etc. Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices. Knowledge of Docker and Kubernetes. Experience with orchestration/scheduling tools like Argo. Experience building and consuming data from REST APIs. Demonstrable ability to think outside of the box and not be dependent on readily available tools. Excellent communication, presentation and interpersonal skills are a must. PySpark AWS Engineer: Good hands-on experience with Python and Bash scripts. 4+ years of good hands-on exposure to Big Data technologies – PySpark (DataFrame and Spark SQL), Hadoop, and Hive. Hands-on experience using cloud-platform Big Data technologies (e.g., Glue, EMR, Redshift, S3, Kinesis). Ability to write Glue jobs and utilise the different core functionalities of Glue. Good understanding of SQL and data warehouse tools like Redshift. Experience with orchestration/scheduling tools like Airflow. Strong analytical, problem-solving, data analysis and research skills. Demonstrable ability to think outside of the box and not be dependent on readily available tools. Excellent communication, presentation and interpersonal skills are a must. Roles & Responsibilities - Collaborate with data engineers and architects to implement and deploy scalable solutions. Provide technical guidance and code review of the deliverables. Play an active role in estimation and planning. Communicate results to diverse technical and non-technical audiences. Generate actionable insights for business improvements. Ability to understand business requirements. Use-case derivation and solution creation from structured/unstructured data. Actively drive a culture of knowledge-building and sharing within the team. Encourage continuous innovation and out-of-the-box thinking. Good To Have: ML Engineer: Experience researching and applying large language and Generative AI models. Experience with LangChain, LlamaIndex, and performance evaluation frameworks. Experience working with model registry, model deployment and monitoring tools (MLflow, application monitoring tools). PySpark AWS Engineer: Experience migrating workloads from on-premises to cloud and between clouds. Experience with data quality frameworks.
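To ground the PySpark expectations listed above, here is a small, hedged sketch of a feature pipeline: read raw events, derive per-driver daily aggregates with the DataFrame API, and write partitioned Parquet. The paths and column names are made up for illustration.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-pipeline").getOrCreate()

# Hypothetical input location; the same code runs against an S3 path on Glue/EMR.
events = spark.read.json("s3://example-bucket/raw/driver_events/")

features = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("driver_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum(F.when(F.col("event_type") == "harsh_brake", 1).otherwise(0)).alias("harsh_brakes"),
        F.avg("speed_kmh").alias("avg_speed_kmh"),
    )
)

# Partitioned Parquet keeps downstream feature-store loads and SQL scans cheap.
(features.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/features/driver_daily/"))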

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Greater Bengaluru Area

Remote

About us: The global hiring revolution is shaping a future where talent can thrive everywhere, driving innovation and progress on a global scale. Multiplier is at the forefront of this change. By removing barriers and simplifying global hiring, we’re creating a level playing field where businesses and individuals – (like you) – can compete, grow, and succeed, regardless of geography. Multiplier empowers companies to hire, onboard, manage, and pay talent in 150+ countries, quickly and compliantly. Our mission is to build a world without limits, where ambitious businesses can look beyond borders to build their global dream teams. Our unified employment platform, complete with world-class EOR, AOR, and Global Payroll products, means it has never been easier to seize the global hiring opportunity. We’re backed by some of the best in the business (Sequoia, DST, and Tiger Global), are led by industry-leading experts, scaling fast, and seeking brilliant like-minded enthusiasts to join our team. The future is borderless. Let’s build it together. A BIT ABOUT THE OPPORTUNITY What you'll do: Design and build, from scratch, the data architecture and data platform necessary to support requirements at Multiplier. Work closely with stakeholders and product managers to deliver all data product requirements for our external and internal customers. Understand internal data sets and sources to build data lakes and warehouses that support ongoing needs. Analyse and utilise external data sets and sources to answer questions and derive insights based on business requirements. What you'll bring: At least 5 years of experience as a Data Engineer or in a related field. Experience with data modelling, data warehousing, and building ETL pipelines, preferably on the AWS stack. Experience with big data tools such as Databricks, Redshift, Snowflake, or similar platforms Proficiency in open table formats like Apache Iceberg, Delta Lake, or Hudi Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases. Experience working with data analytics tools such as Tableau or Quicksight. Experience with high-level scripting/programming languages: Python, JavaScript, Java etc. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Management skills are a huge plus. What we’ll provide for you: Attractive ESOPs Remote employment with a truly remote culture. Ability to contribute to this business at a high level. Working with a compassionate, energetic, inspired, ambitious, and diverse team. Opportunity to grow within a fast-growth business. Competitive benefits, compensation, and culture of recognition. Equipment you need to do your job Unlimited holiday policy. Feel free to apply even if you feel unsure about whether you meet every single requirement in this posting. As long as you're a quick learner, and are excited about changing the status quo for tech recruitment, we're happy to support you as you come up to speed with our tech stack.
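For the ETL-pipeline expectation above, an orchestration layer typically expresses extract, transform, and load as dependent tasks. A minimal Airflow 2.x-style sketch is shown below; the DAG name and task bodies are stubs invented for illustration.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull raw data from a source system (stubbed for illustration).
    print("extracting payroll events")

def transform(**context):
    print("normalising currencies and deduplicating records")

def load(**context):
    print("loading curated tables into the warehouse")

with DAG(
    dag_id="payroll_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load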

Posted 3 weeks ago

Apply

5.0 years

0 - 1 Lacs

Kakkanad West, Kochi, Kerala

On-site

Job Title: Data Warehousing Specialist Experience: Minimum 5 Years Location: Kochi, Kerala Job Type: Full-Time Job Summary: We are looking for a skilled Data Warehousing Specialist with at least 5 years of experience to join our growing IT team in Kochi. The ideal candidate should have strong expertise in SQL, MS SQL Server, and MongoDB, and be able to design, build, and maintain robust data warehouse solutions that support business intelligence and analytics needs. Key Responsibilities: Design, develop, and maintain data warehouse solutions Develop ETL processes and data integration pipelines Optimize database performance and ensure data integrity Collaborate with data analysts and business teams to understand data needs Work with both structured (SQL, MS SQL) and unstructured data (MongoDB) Ensure security and compliance standards for data storage and handling Requirements: Minimum 5 years of experience in data warehousing and database management Strong hands-on experience with SQL, MS SQL Server, and MongoDB Solid understanding of data modeling, schema design, and normalization Experience with ETL tools and data integration frameworks Ability to troubleshoot and optimize complex queries Good communication skills and a problem-solving mindset Preferred Qualifications: Experience with cloud-based data warehousing solutions (e.g., AWS Redshift, Snowflake) Exposure to BI tools (e.g., Power BI, Tableau) is a plus Work Location: Kochi, Kerala Employment Type: Full-Time, Hybrid Mode Job Type: Full-time Pay: ₹55,000.00 - ₹115,000.00 per month Benefits: Health insurance Paid sick time Schedule: Monday to Friday Weekend availability Supplemental Pay: Performance bonus Education: Bachelor's (Required) Experience: Data warehouse: 5 years (Required) Language: English (Required) Work Location: In person Application Deadline: 15/08/2025
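As a hedged sketch of the structured-plus-unstructured requirement above, the snippet below reads documents from a MongoDB collection, flattens them with pandas, and appends them to a relational staging table. The connection strings, database, collection, and table names are hypothetical.

import pandas as pd
from pymongo import MongoClient
from sqlalchemy import create_engine

mongo = MongoClient("mongodb://localhost:27017")            # hypothetical source
docs = list(mongo["shop"]["orders"].find({}, {"_id": 0}))   # drop the ObjectId field

df = pd.json_normalize(docs)            # flatten nested documents into columns
df["loaded_at"] = pd.Timestamp.utcnow() # simple load-audit column

# Hypothetical SQL Server target via ODBC; any SQLAlchemy URL works the same way.
engine = create_engine(
    "mssql+pyodbc://etl_user:password@dwh-server/staging?driver=ODBC+Driver+17+for+SQL+Server"
)
df.to_sql("stg_orders", engine, if_exists="append", index=False)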

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared) - Experience with scripting language (e.g., Python, Java, or R) Are you customer obsessed, flexible, smart and analytical, strategic yet execution-focused, and passionate about e-commerce? Are you an experienced, entrepreneurial leader with a strong work ethic? If yes, this opportunity will appeal to you. The IN Consumer BI Reporting and Analytics (COBRA) team is looking for a highly driven, customer-obsessed Business Intelligence Engineer who will be responsible for building the BI platform and team and supporting key decision-making across the group. You’ll analyze large amounts of data, discover and solve real-world problems, build metrics and business cases around key projects and, most of all, be an integral part of creating a better customer and seller experience. We are looking for customer-obsessed, data-driven entrepreneurs to join our growing team. Solve some of the hardest problems for our customers and Sellers. If you want to operate at start-up speed, solve some of the hardest problems, and build a service that customers love, Amazon.in might just be the place for you. The Business Intelligence Engineer is responsible for driving deep insights about Amazon Business and driving continuous improvement using the analysis. The person should have a detailed understanding of a business requirement, or the ability to quickly get to the root cause of a particular business issue, and draft solutions to meet requirements or resolve the root problems. The BIE will create pipelines for reports to analyze data, make sense of the results and be able to explain what it all means to key stakeholders. This individual will analyze large amounts of data, discover and solve real-world problems and build metrics and business cases around the key performance of the P3P programs. The ideal candidate will use a customer-backwards approach in deriving insights and identifying actions we can take to improve the customer experience and conversion for the program. Key job responsibilities • Develop and streamline necessary dashboards and one-off analyses, providing the ability to surface business-critical KPIs, monitor the health of metrics and effectively communicate performance. • Partner with stakeholders and other Business Intelligence teams to acquire necessary data for robust analysis. • Convert data into insights including implications and recommendations that are specific and actionable for the P3P team and across the business. • Partner with other analysts as well as data engineering and technology teams to support building best-in-class dashboards and data infrastructure. • Communicate insights using data visualization and presentations to stakeholders • The successful candidate will be an expert at analyzing large data sets and have exemplary communication skills. The candidate will need to be a self-starter, very comfortable with ambiguity in a fast-paced and ever-changing environment, and able to think big while paying careful attention to detail. 
Master's degree, or Advanced technical degree Knowledge of data modeling and data pipeline design Experience with statistical analysis, correlation analysis Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
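One way to picture the statistical-methods requirement in the qualifications above (t-test, chi-squared): compare conversion between two cohorts. The numbers below are synthetic, generated purely for illustration.

import numpy as np
from scipy import stats

# Hypothetical daily conversion rates for two cohorts (synthetic data).
rng = np.random.default_rng(7)
cohort_a = rng.normal(loc=0.052, scale=0.004, size=30)
cohort_b = rng.normal(loc=0.055, scale=0.004, size=30)

t_stat, p_value = stats.ttest_ind(cohort_a, cohort_b, equal_var=False)
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.4f}")

# Chi-squared on a 2x2 table of converted vs. not converted (synthetic counts).
table = np.array([[520, 9480],    # cohort A: converted, not converted
                  [565, 9435]])   # cohort B
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"Chi-squared: chi2={chi2:.2f}, dof={dof}, p={p:.4f}")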

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Summary Job Role: Data Engineer – Consultant Offering customer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies and stay ahead of customer demand. Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovative cloud-based solutions across a range of domains and industries (e.g. supply chain management, banking/insurance, CPG, retail, etc.). It is a fast-paced, innovative and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source. We are building and bringing solutions to market which we are hosting and operating for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using the Python, Spark and Snowflake ecosystems. Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based and other cutting-edge technologies to resolve identities at scale. Lead cross-functional initiatives and collaborate with multiple, distributed teams. Produce high-quality code that is robust, efficient, testable and easy to maintain. Deliver operational automation and tooling to minimize repeated manual tasks. Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members. Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact. Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enables financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI goals on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in data engineering. 3 to 5 years of experience in data engineering. Skills/Project Experience - Required: 2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake or equivalent technologies. Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.). Knowledge and experience of working with large datasets. Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.). Experience with developing or consuming web interfaces (REST APIs). Experience with modern software development practices, leveraging CI/CD and containerization such as Docker. Self-driven with a passion for learning and implementing new technologies. A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers. Good to Have: Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.). Experience with or interest in implementing graph-based technologies. Knowledge of or interest in data science and machine learning. Experience with backend infrastructure and how to architect data pipelines. Knowledge of system design and distributed systems. Experience working in a product engineering environment. Experience with data warehouses (BigQuery, Redshift, etc.). Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301
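For the REST-API and pipeline skills listed above, a common ingestion step is paging through an API and landing the raw responses in object storage before any transformation. A minimal sketch using requests and boto3 follows; the endpoint, bucket, key layout, and field names are hypothetical.

import json

import boto3
import requests

s3 = boto3.client("s3")
BASE_URL = "https://api.example.com/v1/transactions"  # hypothetical endpoint
BUCKET = "example-raw-zone"                           # hypothetical bucket

page = 1
while True:
    resp = requests.get(BASE_URL, params={"page": page, "page_size": 500}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    if not payload.get("results"):
        break  # no more pages to fetch

    # Land each page as raw JSON; downstream Spark/Snowpark jobs handle transformation.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"transactions/ingest_date=2024-01-01/page={page:05d}.json",
        Body=json.dumps(payload["results"]).encode("utf-8"),
    )
    page += 1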

Posted 4 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Summary Job Role: Data Engineer – Gen AI Data Pipeline – Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale, complex software solutions at the enterprise level. These applications are often high-volume, mission-critical systems that will give you exposure to end-to-end functional and domain knowledge. You will work with business, functional and technical teams on projects located across shores. You will be responsible for independently leading a team, mentoring team members, and driving all test deliverables across the project life cycle. You will be involved in end-to-end delivery of the project, from testing strategy, estimation and planning through execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for the following activities: Participate in application architecture and design discussions. Work with team leads to define the solution design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure alignment with pre-defined architectural standards, guidelines, best practices, and quality standards. Work on defects/bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies such as scrum meetings and sprint planning. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform/products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications and Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform: Working experience on the AWS cloud platform. AWS Data Pipelines: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with Git version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience: Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval processes. Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data, and knowledge of vector search algorithms to enhance the performance of AI-driven applications. Large Language Models (LLMs): Experience deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis. LangChain: Knowledge of LangChain for building and managing complex data workflows, and experience building scalable data pipelines with LangChain to streamline data processing and integration tasks. Efficiency Improvements: Implementation experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. 
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074

Posted 4 weeks ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Position Summary Job Role: Data Engineer – Gen AI Data Pipeline – Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale, complex software solutions at the enterprise level. These applications are often high-volume, mission-critical systems that will give you exposure to end-to-end functional and domain knowledge. You will work with business, functional and technical teams on projects located across shores. You will be responsible for independently leading a team, mentoring team members, and driving all test deliverables across the project life cycle. You will be involved in end-to-end delivery of the project, from testing strategy, estimation and planning through execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for the following activities: Participate in application architecture and design discussions. Work with team leads to define the solution design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure alignment with pre-defined architectural standards, guidelines, best practices, and quality standards. Work on defects/bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies such as scrum meetings and sprint planning. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform/products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications and Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform: Working experience on the AWS cloud platform. AWS Data Pipelines: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with Git version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience: Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval processes. Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data, and knowledge of vector search algorithms to enhance the performance of AI-driven applications. Large Language Models (LLMs): Experience deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis. LangChain: Knowledge of LangChain for building and managing complex data workflows, and experience building scalable data pipelines with LangChain to streamline data processing and integration tasks. Efficiency Improvements: Implementation experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. 
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Position Summary Job Role: Data Engineer – Consultant Offering customer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies and stay ahead of customer demand. Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovative cloud-based solutions across a range of domains and industries (e.g. supply chain management, banking/insurance, CPG, retail, etc.). It is a fast-paced, innovative and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source. We are building and bringing solutions to market which we are hosting and operating for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using the Python, Spark and Snowflake ecosystems. Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based and other cutting-edge technologies to resolve identities at scale. Lead cross-functional initiatives and collaborate with multiple, distributed teams. Produce high-quality code that is robust, efficient, testable and easy to maintain. Deliver operational automation and tooling to minimize repeated manual tasks. Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members. Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact. Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enables financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI goals on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in data engineering. 3 to 5 years of experience in data engineering. Skills/Project Experience - Required: 2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake or equivalent technologies. Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.). Knowledge and experience of working with large datasets. Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.). Experience with developing or consuming web interfaces (REST APIs). Experience with modern software development practices, leveraging CI/CD and containerization such as Docker. Self-driven with a passion for learning and implementing new technologies. A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers. Good to Have: Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.). Experience with or interest in implementing graph-based technologies. Knowledge of or interest in data science and machine learning. Experience with backend infrastructure and how to architect data pipelines. Knowledge of system design and distributed systems. Experience working in a product engineering environment. Experience with data warehouses (BigQuery, Redshift, etc.). Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301

Posted 4 weeks ago

Apply