Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
5.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Python (Programming Language)
Good to have skills: AWS Redshift, AWS Architecture, AWS Lambda Administration, Terraform
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive successful outcomes. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously assess and improve application performance and user experience.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Python (Programming Language).
- Good-to-Have Skills: Experience with AWS Lambda Administration, AWS Redshift, AWS Architecture.
- Strong understanding of application design principles and the software development life cycle.
- Experience with version control systems such as Git.
- Familiarity with Agile methodologies and project management tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based in Pune.
- 15 years of full-time education is required.
Posted 4 days ago
2.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Data Engineering
Good to have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), Google BigQuery, Google Cloud Platform Architecture
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. A typical day involves creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs effectively.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Data Engineering.
- Good-to-Have Skills: Experience with Oracle Procedural Language Extensions to SQL (PL/SQL), Google BigQuery, Google Cloud Platform Architecture.
- Strong understanding of data modeling and database design principles.
- Experience with data warehousing solutions and data lake architectures.
- Familiarity with data integration tools and ETL frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 4 days ago
3.0 - 8.0 years
4 - 8 Lacs
Noida
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery
Good to have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and maintain data pipelines.
- Ensure data quality throughout the data lifecycle.
- Implement ETL processes for data migration and deployment (a minimal BigQuery sketch follows this listing).
- Collaborate with cross-functional teams to understand data requirements.
- Optimize data storage and retrieval processes.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Google BigQuery.
- Strong understanding of data engineering principles.
- Experience with cloud-based data services.
- Knowledge of SQL and database management systems.
- Hands-on experience with data modeling and schema design.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google BigQuery.
- This position is based at our Mumbai office.
- 15 years of full-time education is required.
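As a hedged illustration of the BigQuery pipeline work this listing describes (not part of the posting itself), here is a minimal load-and-transform sketch using the google-cloud-bigquery client. The bucket, dataset, and table names are hypothetical placeholders.

```python
# Minimal sketch of a BigQuery ETL step. Assumes application-default
# credentials; bucket, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# Load a CSV landed in Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/orders.csv",        # hypothetical source file
    "example_dataset.orders_staging",        # hypothetical staging table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()  # block until the load completes

# Transform into an analytical table with a basic quality filter.
client.query("""
    CREATE OR REPLACE TABLE example_dataset.orders_clean AS
    SELECT order_id, customer_id, order_total
    FROM example_dataset.orders_staging
    WHERE order_total IS NOT NULL
""").result()
```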
Posted 4 days ago
4.0 - 9.0 years
9 - 19 Lacs
Noida, Hyderabad, Pune
Work from Office
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Responsibilities:
- Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager.
- Build and maintain ETL/ELT pipelines for both batch and streaming data.
- Work with structured and unstructured datasets at scale.
- Apply data modeling principles and advanced SQL techniques.
- Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats (a minimal sketch follows this listing).
- Collaborate with product teams to understand requirements and deliver optimized data solutions.
- Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
- Work independently with minimal supervision and strong ownership of deliverables.

Must Have:
- 4+ years of experience in Data Engineering on AWS Cloud.
- Hands-on expertise in Apache Spark (PySpark, SparkSQL), Delta Lake/Iceberg formats, Databricks on AWS, AWS Glue, Amazon Athena, and Amazon Redshift.
- Strong SQL skills and performance tuning experience on large datasets.
- Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
- Experience with environment setup, cluster management, user roles, and authentication in Databricks.
- Databricks Certified Data Engineer Professional certification (mandatory).

Good To Have:
- Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
- Experience with Databricks ML or Spark 3.x upgrades.
- Familiarity with Airflow, Step Functions, or other orchestration tools.
- Experience integrating Databricks with AWS services in a secured, production-ready environment.
- Experience with monitoring and cost optimization in AWS.

Key Skills:
- Languages: Python, SQL, PySpark
- Big Data Tools: Apache Spark, Delta Lake, Iceberg
- Platform: Databricks on AWS
- AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
- Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
- Other: Data Modeling, ETL Methodology, Performance Optimization
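To make the PySpark/Delta Lake pipeline work above concrete, here is a minimal batch-ingestion sketch. It assumes a Spark session with the Delta Lake extensions available (e.g., a Databricks cluster); the S3 paths and column names are hypothetical.

```python
# Minimal sketch of a batch ETL step with PySpark and Delta Lake.
# Paths and columns are hypothetical; assumes Delta support is present
# in the runtime (e.g., Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in S3.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: basic cleansing and a derived partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write as a partitioned Delta table.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-bucket/curated/orders/"))
```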
Posted 4 days ago
15.0 - 20.0 years
45 - 55 Lacs
Pune, Bengaluru
Work from Office
Job Description for a Senior Solution Architect – Data & Cloud
Job Title: Senior Solution Architect – Data & Cloud
Experience: 12+ Years
Location: Hybrid / Remote
Practice: Migration Works
Employment Type: Full-time

About Company: We are a data and analytics firm that provides the strategies, tools, capability and capacity that businesses need to turn their data into a competitive advantage. USEReady partners with cloud and data ecosystem leaders like Tableau, Salesforce, Snowflake, Starburst and Amazon Web Services, and has been named Tableau Partner of the Year multiple times. Headquartered in NYC, the company has 450 employees across offices in the U.S., Canada, India and Singapore and specializes in financial services. USEReady's deep analytics expertise, unique player/coach approach and focus on fast results make the company a perfect partner for a cloud-first, digital world.

About the Role: We are looking for a highly experienced Senior Solution Architect to join our Migration Works practice, specializing in modern data platforms and visualization tools. The ideal candidate will bring deep technical expertise in Tableau, Power BI, AWS, and Snowflake, along with strong client-facing skills and the ability to design scalable, high-impact data solutions. You will be at the forefront of driving our AI-driven migration and modernization initiatives, working closely with customers to understand their business needs and guiding delivery teams to success.

Key Responsibilities:
Solution Design & Architecture
- Lead the end-to-end design of cloud-native data architecture using the AWS, Snowflake, and Azure stacks.
- Translate complex business requirements into scalable and efficient technical solutions.
- Architect modernization strategies from legacy BI systems to cloud-native platforms.
Client Engagement
- Conduct technical discussions with enterprise clients and stakeholders to assess needs and define the roadmap.
- Act as a trusted advisor during pre-sales and delivery phases, showcasing technical leadership and a consultative approach.
Migration & Modernization
- Design frameworks for data platform migration (from on-premise to cloud), data warehousing, and analytics transformation.
- Support estimation, planning, and scoping of migration projects.
Team Leadership & Delivery Oversight
- Guide and mentor delivery teams across geographies, ensuring solution quality and alignment with client goals.
- Support delivery by providing architectural oversight and resolving design bottlenecks.
- Conduct technical reviews, define best practices, and uplift the team's capabilities.

Required Skills & Experience:
- 15+ years of progressive experience in data and analytics, with at least 5 years in solution architecture roles.
- Strong hands-on expertise in: Tableau and Power BI (dashboard design, visualization architecture, and migration from legacy BI tools); AWS (S3, Redshift, Glue, Lambda, and data pipeline components); Snowflake (architecture, SnowConvert, data modeling, security, and performance optimization).
- Experience in migrating legacy platforms (e.g., Cognos, BO, Qlik) to modern BI/cloud-native stacks like Tableau and Power BI.
- Proven ability to interface with senior client stakeholders, understand business problems, and propose architectural solutions.
- Strong leadership, communication, and mentoring skills.
- Familiarity with data governance, security, and compliance in cloud environments.

Preferred Qualifications:
- AWS/Snowflake certifications are a strong plus.
- Exposure to data catalogs, lineage tools, and metadata management.
- Knowledge of ETL/ELT tools such as Talend, Informatica, or dbt.
- Prior experience working in consulting or fast-paced client services environments.

What We Offer:
- Opportunity to work on cutting-edge AI-led cloud and data migration projects.
- A collaborative and high-growth environment with room to shape future strategy.
- Access to learning programs, certifications, and technical leadership exposure.
Posted 4 days ago
0.0 years
0 - 0 Lacs
Gurugram
Work from Office
About the Team: Join a highly skilled and collaborative team dedicated to ensuring data reliability, performance, and security across our organization's critical systems. We work closely with developers, architects, and DevOps professionals to deliver seamless and scalable database solutions in a cloud-first environment, leveraging the latest in AWS and open-source technologies. Our team values continuous learning, innovation, and the proactive resolution of database challenges.

About the Role: As a Database Administrator specializing in MySQL and Postgres within AWS environments, you will play a key role in architecting, deploying, and supporting the backbone of our data infrastructure. You'll leverage your expertise to optimize database instances, manage large-scale deployments, and ensure our databases are secure, highly available, and resilient. This is an opportunity to collaborate across teams, stay ahead with emerging technologies, and contribute directly to our business success.

Responsibilities:
- Design, implement, and maintain MySQL and Postgres database instances on AWS, including managing clustering and replication (MongoDB, Postgres solutions).
- Write, review, and optimize stored procedures, triggers, functions, and scripts for automated database management.
- Continuously tune, index, and scale database systems to maximize performance and handle rapid growth.
- Monitor database operations to ensure high availability, robust security, and optimal performance.
- Develop, execute, and test backup and disaster recovery strategies in line with company policies.
- Collaborate with development teams to design efficient and effective database schemas aligned with application needs.
- Troubleshoot and resolve database issues, implementing corrective actions to restore service and prevent recurrence.
- Enforce and evolve database security best practices, including access controls and compliance measures.
- Stay updated on new database technologies, AWS advancements, and industry best practices.
- Plan and perform database migrations across AWS regions or instances.
- Manage clustering, replication, installation, and sharding for MongoDB, Postgres, and related technologies.

Requirements:
- 4-7 years of experience in database management systems as a Database Engineer.
- Proven experience as a MySQL/Postgres Database Administrator in high-availability, production environments.
- Expertise in AWS cloud services, especially EC2, RDS, Aurora, DynamoDB, S3, and Redshift.
- In-depth knowledge of DR (Disaster Recovery) setups, including active-active and active-passive master configurations.
- Hands-on experience with MySQL partitioning and AWS Redshift.
- Strong understanding of database architectures, replication, clustering, and backup strategies (including Postgres replication and backup).
- Advanced proficiency in optimizing and troubleshooting SQL queries; adept with performance tuning and monitoring tools.
- Familiarity with scripting languages such as Bash or Python for automation/maintenance.
- Experience with MongoDB, Postgres clustering, Cassandra, and related NoSQL or distributed database solutions.
- Ability to provide 24/7 support and participate in on-call rotation schedules.
- Excellent problem-solving, communication, and collaboration skills.

What we offer:
- A positive, get-things-done workplace.
- A dynamic, constantly evolving space (change is par for the course – important you are comfortable with this).
- An inclusive environment that ensures we listen to a diverse range of voices when making decisions.
- The ability to learn cutting-edge concepts and innovation in an agile start-up environment with global scale.
- Access to 5000+ training courses accessible anytime/anywhere to support your growth and development (corporate tie-ups with top learning partners like Harvard, Coursera, Udacity).

About us: At PayU, we are a global fintech investor and our vision is to build a world without financial borders where everyone can prosper. We give people in high-growth markets the financial services and products they need to thrive. Our expertise in 18+ high-growth markets enables us to extend the reach of financial services. This drives everything we do, from investing in technology entrepreneurs to offering credit to underserved individuals, to helping merchants buy, sell, and operate online. Being part of Prosus, one of the largest technology investors in the world, gives us the presence and expertise to make a real impact. Find out more at www.payu.com

Our Commitment to Building a Diverse and Inclusive Workforce: As a global and multi-cultural organization with varied ethnicities thriving across locations, we realize that our responsibility towards fulfilling the D&I commitment is huge. Therefore, we continuously strive to create a diverse, inclusive, and safe environment for all our people, communities, and customers. Our leaders are committed to creating an inclusive work culture which enables transparency, flexibility, and unbiased attention to every PayUneer so they can succeed, irrespective of gender, color, or personal faith. An environment where every person feels they belong, that they are listened to, and where they are empowered to speak up. At PayU we have zero tolerance towards any form of prejudice, whether against a specific race or ethnicity, persons with disabilities, or the LGBTQ communities.
Posted 1 week ago
8.0 - 12.0 years
27 - 42 Lacs
Hyderabad
Work from Office
Job Summary:
- Strong knowledge of AWS services, including S3, AWS DMS (Database Migration Service), and AWS Redshift Serverless.
- Experience in setting up and managing data pipelines using AWS DMS.
- Proficiency in creating and managing data storage solutions using AWS S3.
- Proficiency in working with relational databases, particularly PostgreSQL, Microsoft SQL Server, and Oracle.
- Experience in setting up and managing data warehouses, particularly AWS Redshift Serverless.

Responsibilities:
Analytical and Problem-Solving Skills
- Ability to analyze and interpret complex data sets.
- Experience in identifying and resolving data integration issues such as inconsistencies or discrepancies.
- Strong problem-solving skills to troubleshoot and resolve data integration and migration issues.
Soft Skills
- Ability to work collaboratively with database administrators and other stakeholders to ensure integration solutions meet business requirements.
- Strong communication skills to document data integration processes, including data source definitions, data flow diagrams, and system interactions.
- Ability to participate in design reviews and provide input on data integration plans.
- Willingness to stay updated with the latest data integration tools and technologies and recommend upgrades when necessary.
Security and Compliance
- Knowledge of data security and privacy regulations.
- Experience in ensuring adherence to data security and privacy standards during data integration processes.
Certifications Required
- AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database - Specialty are a plus.
Posted 1 week ago
5.0 - 9.0 years
9 - 13 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Big Data Tester
Location: Pune (for Mastercard)
Experience Level: 5-9 years

Minimum Skill Set Required / Must Have:
- Python
- PySpark
- Testing skills and best practices for data validation (a minimal validation sketch follows this listing)
- SQL (hands-on experience, especially with complex queries) and ETL

Good to Have:
- Unix
- Big Data: Hadoop, Spark, Kafka, NoSQL databases (MongoDB, Cassandra), Hive, etc.
- Data Warehouse: Traditional - Oracle, Teradata, SQL Server; Modern Cloud - Amazon Redshift, Google BigQuery, Snowflake
- AWS development experience (not mandatory, but beneficial)

Best Fit: Python + PySpark + Testing + SQL (hands-on) and ETL + Good to Have skills
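As a hedged sketch of the data-validation testing this role centres on, here is a minimal PySpark reconciliation check between a staging extract and a warehouse target. Table names and the tolerance value are hypothetical.

```python
# Minimal sketch of an ETL reconciliation test in PySpark.
# Table names are hypothetical; assumes both tables are registered
# in the session's catalog.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.table("staging.orders")
target = spark.table("warehouse.orders")

# Row-count reconciliation: source and target should match exactly.
assert source.count() == target.count(), "row count mismatch"

# Null check on a mandatory business key.
null_keys = target.filter(F.col("order_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows with NULL order_id"

# Aggregate reconciliation: totals should agree to the cent.
src_total = source.agg(F.sum("order_total")).first()[0]
tgt_total = target.agg(F.sum("order_total")).first()[0]
assert abs(src_total - tgt_total) < 0.01, "order_total drift between layers"
```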
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
ECMS Req #: 533599
Number of Openings: 1
Duration of Hiring: 6 months
Years of experience: Total 6-8 years; Relevant 4-5 years

Detailed job description - Skill Set:

Required Qualifications:
- 4+ years of experience in data engineering or warehousing with a focus on Amazon Redshift.
- Strong proficiency in SQL, with the ability to write and optimize complex queries for large datasets.
- Solid understanding of dimensional modeling, Star Schema, and OLAP vs OLTP data structures.
- Experience in designing analytical data marts and transforming raw/transactional data into structured analytical formats.
- Hands-on experience with ETL tools (e.g., AWS Glue).
- Familiarity with Amazon Redshift Spectrum, RA3 nodes, and data distribution/sort key best practices (a minimal sketch follows this listing).
- Comfortable working in cloud-native environments, particularly AWS (S3, Lambda, CloudWatch, IAM, etc.).

Preferred Qualifications:
- Exposure to data lake integration, external tables, and Redshift UNLOAD/COPY operations.
- Experience with BI tools (e.g., Tableau, QuickSight) to validate and test data integration.
- Familiarity with Python or PySpark for data transformation scripting.
- Understanding of CI/CD for data pipelines and version control using Git.
- Knowledge of data security, encryption, and compliance in a cloud environment.

Mandatory Skills (ONLY 2 or 3): Amazon Redshift, SQL
Vendor Billing range in local currency (per day): INR 8500/day
Work Location: Any Infosys DC
WFO/WFH/Hybrid: Hybrid
Joining time (notice period): As early as possible
Working in shifts outside standard daylight hours: No
BG check before or after onboarding: Before - Final BG report
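To illustrate the distribution/sort-key practice named above, here is a minimal sketch that issues a keyed table DDL through the redshift_connector driver. The connection details and the table design are hypothetical placeholders, and the key choices shown are one reasonable option, not a prescription.

```python
# Minimal sketch of applying distribution and sort keys in Redshift
# via the redshift_connector driver. Connection parameters and the
# table design are hypothetical.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="...",  # in practice, fetch from Secrets Manager
)

ddl = """
    CREATE TABLE sales_fact (
        sale_id     BIGINT,
        customer_id BIGINT,
        sale_date   DATE,
        amount      DECIMAL(12,2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)   -- co-locate rows joined on customer_id
    SORTKEY (sale_date);    -- prune range scans on date filters
"""

cursor = conn.cursor()
cursor.execute(ddl)
conn.commit()
```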
Posted 1 week ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational Requirements: Master of Comp. Applications, Master of Engineering, Master of Science, Master of Technology, Bachelor of Comp. Applications, Bachelor of Science, Bachelor of Engineering, Bachelor of Technology

Service Line: Engineering Services

Responsibilities: A day in the life of an Infoscion - As part of the Infosys consulting team, your primary role would be to lead the engagement effort of providing high-quality and value-adding consulting solutions to customers at different stages, from problem definition to diagnosis to solution design, development and deployment. You will review the proposals prepared by consultants, provide guidance, and analyze the solutions defined for the client business problems to identify any potential risks and issues. You will identify change management requirements and propose a structured approach to the client for managing the change using multiple communication mechanisms. You will also coach and create a vision for the team, provide subject matter training for your focus areas, and motivate and inspire team members through effective and timely feedback and recognition for high performance. You would be a key contributor in unit-level and organizational initiatives with an objective of providing high-quality, value-adding consulting solutions to customers, adhering to the guidelines and processes of the organization. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Good knowledge of software configuration management systems.
- Strong business acumen, strategy and cross-industry thought leadership.
- Awareness of the latest technologies and industry trends.
- Logical thinking and problem-solving skills along with an ability to collaborate.
- Knowledge of two or three industry domains.
- Understanding of the financial processes for various types of projects and the various pricing models available.
- Client interfacing skills.
- Knowledge of SDLC and agile methodologies.
- Project and team management.

Technical and Professional Requirements: Technology-Cloud Platform-AWS Database-AWS, Technology-Container Platform-Docker, Technology-Container Platform-Kubernetes

Preferred Skills: Technology-Cloud Platform-AWS Database, Technology-Container Platform-Docker, Technology-Cloud Platform-Power Platform
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience in Big Data technologies like Hadoop, Apache Spark, Hive.
- Practical experience in Core Java (1.8 preferred)/Python/Scala.
- Experience with AWS cloud services including S3, Redshift, EMR, etc.
- Strong expertise in RDBMS and SQL.
- Good experience in Linux and shell scripting.
- Experience building data pipelines using Apache Airflow (a minimal DAG sketch follows this listing).

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
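Since the listing above calls out data pipelines with Apache Airflow, here is a minimal DAG sketch (Airflow 2.x API). The task bodies, IDs, and schedule are hypothetical placeholders, not the employer's actual pipeline.

```python
# Minimal sketch of an Airflow 2.x DAG for a daily ingest job.
# Task bodies and names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from S3")  # placeholder for real extract logic


def load():
    print("copy cleaned data into Redshift")  # placeholder for real load logic


with DAG(
    dag_id="daily_orders_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run load only after extract succeeds
```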
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (much like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive Context objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time (a minimal sketch follows this listing).

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
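The Hive-to-Spark-SQL rewrite mentioned above can be pictured with a minimal sketch: the same aggregation run through Spark's engine instead of a Hive batch job, first as SQL, then in the composable DataFrame form. Table and column names are hypothetical.

```python
# Minimal sketch of rewriting a Hive query in Spark SQL.
# Table and column names are hypothetical; assumes a Hive metastore.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("hive-to-sparksql")
    .enableHiveSupport()   # read Hive-managed tables via the metastore
    .getOrCreate()
)

# The original Hive query, now executed by Spark's engine:
daily_totals = spark.sql("""
    SELECT sale_date, SUM(amount) AS total_amount
    FROM sales.transactions
    GROUP BY sale_date
""")

# Equivalent DataFrame form, easier to unit-test and compose:
daily_totals_df = (
    spark.table("sales.transactions")
         .groupBy("sale_date")
         .agg(F.sum("amount").alias("total_amount"))
)

daily_totals_df.write.mode("overwrite").saveAsTable("sales.daily_totals")
```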
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (much like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive Context objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
Posted 1 week ago
14.0 - 19.0 years
7 - 12 Lacs
Noida
Work from Office
We are looking for a Senior Manager - MLOps to join our Technology team at Clarivate. You will get the opportunity to work in a cross-cultural work environment while working on the latest web technologies with an emphasis on user-centered design.

About You (Skills & Experience Required):
- Bachelor's or master's degree in computer science, engineering, or a related field.
- Overall 14+ years of experience spanning DevOps, machine learning operations, and the data engineering domain.
- Proven experience in managing and leading technical teams.
- Strong understanding of MLOps practices, tools, and frameworks.
- Proficiency in data pipelines, data cleaning, and feature engineering is essential for preparing data for model training.
- Knowledge of programming languages (Python, R) and version control systems (Git) is necessary for building and maintaining MLOps pipelines.
- Experience with MLOps-specific tools and platforms (e.g., Kubeflow, MLflow, Airflow) can streamline MLOps workflows (a minimal MLflow sketch follows this listing).
- DevOps principles, including CI/CD pipelines, infrastructure as code (IaC), and monitoring, are helpful for automating ML workflows.
- Familiarity with cloud platforms (AWS, GCP, Azure) and their associated services (e.g., compute, storage, ML platforms) is essential for deploying and scaling ML models.
- Familiarity with container orchestration tools like Kubernetes can help manage and scale ML workloads efficiently.

It would be great if you also had:
- Experience with big data technologies (Hadoop, Spark).
- Knowledge of data governance and security practices.
- Familiarity with DevOps practices and tools.

What will you be doing in this role?

Data Science Model Deployment & Monitoring:
- Oversee the deployment of machine learning models into production environments.
- Ensure continuous monitoring and performance tuning of deployed models.
- Implement robust CI/CD pipelines for model updates and rollbacks.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Communicate project status, risks, and opportunities to stakeholders.
- Provide technical guidance and support to team members.

Infrastructure & Automation:
- Design and manage scalable infrastructure for model training and deployment.
- Automate repetitive tasks to improve efficiency and reduce errors.
- Ensure the infrastructure meets security and compliance standards.

Innovation & Improvement:
- Stay updated with the latest trends and technologies in MLOps.
- Identify opportunities for process improvements and implement them.
- Drive innovation within the team to enhance MLOps capabilities.
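To ground one of the MLOps tools the listing names, here is a minimal MLflow experiment-tracking sketch. The experiment name, parameters, and metric values are hypothetical placeholders; training itself is elided.

```python
# Minimal sketch of MLflow experiment tracking. The experiment name,
# hyperparameters, and metric values are hypothetical.
import mlflow

mlflow.set_experiment("demand-forecast")

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters the run was trained with.
    mlflow.log_param("learning_rate", 0.05)
    mlflow.log_param("n_estimators", 200)

    # ... train the model here ...

    # Log evaluation metrics so runs are comparable in the MLflow UI.
    mlflow.log_metric("rmse", 12.4)
    mlflow.log_metric("mape", 0.087)
```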
Posted 1 week ago
2.0 - 4.0 years
3 - 7 Lacs
Bharatpur
Work from Office
Who are you?
- 2+ years of professional data engineering experience.
- Someone who spends as much time thinking about business insights as they do on engineering.
- A self-starter who drives initiatives.
- Excited to pick up AI and integrate it at various touch points.
- Strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better).
- Aware of Athena, Glue, and Jupyter, or with the intent to pick them up.
- Comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools.
Posted 1 week ago
2.0 - 4.0 years
3 - 7 Lacs
Nellore
Work from Office
Full Time Role at EssentiallySports for Data Growth Engineer

EssentiallySports is the home for the underserved fan, delivering storytelling that goes beyond the headlines. As a media platform, we combine deep audience insights with cultural trends to meet fandom where it lives and where it goes next.

Values:
- Focus on the user and all else will follow.
- Hire for intent and not for experience.
- Bootstrapping gives you the freedom to serve the customer and the team instead of investors.
- Internet and technology untap the niches.
- Action-oriented, integrity, freedom, strong communicators, and responsibility.
- All things equal, the one with high agency wins.

EssentiallySports is a top-10 sports media platform in the U.S., generating over a billion pageviews a year and 30M+ monthly active users. This massive traffic fuels our data-driven culture, allowing us to build owned audiences at scale through organic growth - a model we take pride in, with zero CAC. The next phase of ES growth is around the newsletter initiative; in less than 9 months, we've built a robust newsletter brand with 700,000+ highly engaged readers and impressive performance metrics:
- 5 newsletter brands
- 700k+ subscribers
- Open rates of 40%-46%

The role is for a data engineer with growth and business acumen, in the "permissionless growth" team. Someone who can connect the pipelines of millions of users, but at the same time knit a story of the how and why.

Responsibilities:
- Owning the data pipeline from Web to Athena to Email, end-to-end (a minimal Athena sketch follows this listing).
- You'll make the key decisions and see them through to successful user sign-up.
- Use data science to find real insights, which translate to user engagement.
- Pushing changes every weekday.
- Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value.

Who are you?
- 2+ years of professional data engineering experience.
- Someone who spends as much time thinking about business insights as they do on engineering.
- A self-starter who drives initiatives.
- Excited to pick up AI and integrate it at various touch points.
- Strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better).
- Aware of Athena, Glue, and Jupyter, or with the intent to pick them up.
- Comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools.
- Collaborative and wanting to see the team succeed in its goals.
- A problem-solving, proactive and solution-oriented mindset, to spot opportunities and translate them into real growth.
- Ability to thrive in startups with a fast-paced environment and take ownership for working through ambiguity.
- Excited to join a lean team in a big company that moves quickly.
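As a hedged illustration of the "Web to Athena" step named above, here is a minimal sketch that runs an Athena query over page-view data with boto3. The database, table, region, and S3 results location are hypothetical placeholders.

```python
# Minimal sketch of querying page-view data in Athena via boto3.
# Database, table, and the S3 results location are hypothetical.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT article_id, COUNT(*) AS pageviews
        FROM web_analytics.pageviews
        WHERE event_date = CURRENT_DATE
        GROUP BY article_id
        ORDER BY pageviews DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "web_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("query id:", response["QueryExecutionId"])
```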
Posted 1 week ago
8.0 - 13.0 years
18 - 22 Lacs
Mumbai, Chennai, Bengaluru
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your role: You will play a key role in Data Strategy. We are looking for someone with 8+ years of experience in Data Strategy (tech architects, senior BAs) who will support our product, sales, and leadership teams by creating data-strategy roadmaps. The ideal candidate is adept at understanding as-is enterprise data models to help data scientists and data analysts provide actionable insights to the leadership. They must have strong experience in understanding data using a variety of data tools, a proven ability to understand the current data pipeline and ensure a minimal-cost solution architecture is created, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
- Identify, design, and recommend internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Identify data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to create frameworks for digital twins/digital threads, with relevant experience in data exploration and profiling.
- Be involved in data literacy activities for all stakeholders and coordinate with cross-functional teams; act as the SPOC for global master data.

Your profile:
- 8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field.
- Experience with understanding big data tools: Hadoop, Spark, Kafka, etc.
- Experience with understanding relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
- Experience with understanding data pipeline and workflow management tools: Luigi, Airflow, etc.
- 5+ years of advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases: Postgres/SQL/Mongo.
- 2+ years of working knowledge in Data Strategy: data governance/MDM, etc.
- 5+ years of experience in creating data strategy frameworks/roadmaps, in analytics and data maturity evaluation based on a current as-is vs to-be framework, and in creating functional requirements documents and enterprise to-be data architecture.
- Relevant experience in identifying and prioritizing use cases for the business; identification of important KPIs and opex/capex for CXOs.
- 4+ years of experience in a Data Analytics operating model with vision on prescriptive, descriptive, predictive, and cognitive analytics.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Location: Bengaluru, Mumbai, Chennai, Pune, Hyderabad, Noida
Posted 1 week ago
10.0 - 14.0 years
9 - 13 Lacs
Navi Mumbai
Work from Office
Skill required: Supply Chain - SAP SCM APO Demand Planning
Designation: Business Advisory Associate Manager
Qualifications: Any Graduation
Years of Experience: 10 to 14 years

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song - all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do?
Manage planning, procurement, distribution and aftermarket service supply chain operations, helping clients realize $5 for every $1 spent on our services. You will support a solution used to forecast market demand for the company's products and produce a consensus demand plan, in volume, value, or both, making use of automated planning and management by exception. You will be part of the Supply Chain Management team, where you will be accountable for managing the supply chain and providing insights that help increase efficiency by doing away with waste and facilitating greater profits. It also involves management of the flow of goods and services, including all processes that transform raw materials into final products.
- Bachelor's degree in Supply Chain, Business, Statistics, Engineering, or a related field.
- 7-10 years of experience in demand planning, forecasting, or supply chain analytics with thorough process understanding.
- Strong analytical skills with proficiency in Excel, and experience with forecasting tools (e.g., Kinaxis, SAP IBP, APO, Oracle Demantra, or similar).
- Familiarity with ERP systems (e.g., Kinaxis, SAP) and data visualization tools (e.g., Power BI, Tableau) is a plus.
- Excellent communication and collaboration skills to work with cross-functional teams.
- High attention to detail and a proactive problem-solving mindset.
- Ability to manage multiple priorities in a fast-paced FMCG environment.

What are we looking for?
- Analyze historical sales data, market trends, and promotional activity to generate accurate demand forecasts.
- Collaborate with Sales and Marketing teams to incorporate business intelligence into forecasting models.
- Maintain and improve statistical forecasting tools and systems.
- Monitor forecast accuracy and identify root causes for deviations.
- Drive continuous improvement in demand planning processes and tools.
- Participate in monthly S&OP (Sales and Operations Planning) meetings to align demand with supply.
- Coordinate with supply planners to ensure inventory availability and service level targets are met.
- Track and report key performance indicators (KPIs) such as forecast accuracy, bias, and inventory turnover.
- Support new product launches and phase-outs with demand planning inputs.
- Identify and mitigate risks related to demand variability and supply constraints.
- Work with IT and data teams to enhance data quality and reporting automation.

Roles and Responsibilities:
- In this role you are required to analyze and solve moderately complex problems.
- Typically creates new solutions, leveraging and, where needed, adapting existing methods and procedures.
- Requires understanding of the strategic direction set by senior management as it relates to team goals.
- Primary upward interaction is with the direct supervisor or team leads.
- Generally interacts with peers and/or management levels at a client and/or within Accenture.
- Should require minimal guidance when determining methods and procedures on new assignments.
- Decisions often impact the team in which they reside and occasionally impact other teams.
- Would manage medium-small sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
- Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation
Posted 1 week ago
7.0 - 11.0 years
8 - 13 Lacs
Navi Mumbai
Work from Office
Skill required: Supply Chain - SAP SCM APO Demand Planning
Designation: Business Advisory Specialist
Qualifications: Any Graduation
Years of Experience: 7 to 11 years

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song - all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do?
Manage planning, procurement, distribution and aftermarket service supply chain operations, helping clients realize $5 for every $1 spent on our services. You will support a solution used to forecast market demand for the company's products and produce a consensus demand plan, in volume, value, or both, making use of automated planning and management by exception. You will be part of the Supply Chain Management team, where you will be accountable for managing the supply chain and providing insights that help increase efficiency by doing away with waste and facilitating greater profits. It also involves management of the flow of goods and services, including all processes that transform raw materials into final products.
- Bachelor's degree in Supply Chain, Business, Statistics, Engineering, or a related field.
- 7-10 years of experience in demand planning, forecasting, or supply chain analytics with thorough process understanding.
- Strong analytical skills with proficiency in Excel, and experience with forecasting tools (e.g., Kinaxis, SAP IBP, APO, Oracle Demantra, or similar).
- Familiarity with ERP systems (e.g., Kinaxis, SAP) and data visualization tools (e.g., Power BI, Tableau) is a plus.
- Excellent communication and collaboration skills to work with cross-functional teams.
- High attention to detail and a proactive problem-solving mindset.
- Ability to manage multiple priorities in a fast-paced FMCG environment.

What are we looking for?
- Analyze historical sales data, market trends, and promotional activity to generate accurate demand forecasts.
- Collaborate with Sales and Marketing teams to incorporate business intelligence into forecasting models.
- Maintain and improve statistical forecasting tools and systems.
- Monitor forecast accuracy and identify root causes for deviations.
- Drive continuous improvement in demand planning processes and tools.
- Participate in monthly S&OP (Sales and Operations Planning) meetings to align demand with supply.
- Coordinate with supply planners to ensure inventory availability and service level targets are met.
- Track and report key performance indicators (KPIs) such as forecast accuracy, bias, and inventory turnover.
- Support new product launches and phase-outs with demand planning inputs.
- Identify and mitigate risks related to demand variability and supply constraints.
- Work with IT and data teams to enhance data quality and reporting automation.

Roles and Responsibilities:
- In this role you are required to analyze and solve moderately complex problems.
- May create new solutions, leveraging and, where needed, adapting existing methods and procedures.
- Requires understanding of the strategic direction set by senior management as it relates to team goals.
- Primary upward interaction is with the direct supervisor.
- May interact with peers and/or management levels at a client and/or within Accenture.
- Guidance would be provided when determining methods and procedures on new assignments.
- Decisions made by you will often impact the team in which you reside.
- Would manage small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
- Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation
Posted 1 week ago
15.0 - 20.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and guidance to your team members while continuously seeking opportunities for improvement in application design and functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor application performance and implement necessary enhancements.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Talend ETL.
- Good-to-Have Skills: Experience with data integration tools and methodologies.
- Strong understanding of data warehousing concepts and practices.
- Experience in developing and maintaining ETL processes.
- Familiarity with database management systems and SQL.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend ETL.
- This position is based at our Pune office.
- 15 years of full-time education is required.
Posted 1 week ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply with Sanjeevan Natarajan - 94866 21923 / sanjeevan.natarajan@careernet.in

Role & responsibilities:
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile:
- Python, SQL, PySpark, Databricks, AWS (mandatory)
- Leadership experience in data engineering/architecture
- Added advantage: experience in Life Sciences / Pharma
Posted 1 week ago
3.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Your role will require you to stay updated on industry trends and best practices to enhance application performance and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate communication between technical teams and stakeholders to ensure alignment on project goals.
- Mentor junior team members, providing them with guidance and support in their professional development.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Talend ETL.
- Strong understanding of data integration processes and methodologies.
- Experience with data warehousing concepts and practices.
- Familiarity with SQL and database management systems.
- Ability to troubleshoot and resolve technical issues related to application performance.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Talend ETL.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 1 week ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Redshift
Good to have skills: PySpark
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the application development process.
- Coordinate with stakeholders to gather requirements.
- Ensure timely delivery of projects.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in AWS Glue.
- Good-to-Have Skills: Experience with PySpark.
- Strong understanding of ETL processes.
- Experience in data transformation and integration.
- Knowledge of cloud computing platforms.
- Ability to troubleshoot and resolve technical issues.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Analyze and troubleshoot data-related issues to ensure optimal performance of data solutions.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data modeling concepts and database design.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms such as AWS or Azure for data storage and processing.
- Knowledge of data governance and data quality best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 1 week ago
2.0 - 4.0 years
3 - 7 Lacs
Ballari
Work from Office
Responsibilities:
- Owning the data pipeline from Web to Athena to Email, end-to-end.
- You'll make the key decisions and see them through to successful user sign-up.
- Use data science to find real insights, which translate to user engagement.
- Pushing changes every weekday.
- Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value.

Who are you?
- 2+ years of professional data engineering experience.
- Someone who spends as much time thinking about business insights as they do on engineering.
- A self-starter who drives initiatives.
- Excited to pick up AI and integrate it at various touch points.
- Strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better).
- Aware of Athena, Glue, and Jupyter, or with the intent to pick them up.
- Comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools.
- Collaborative and wanting to see the team succeed in its goals.
- A problem-solving, proactive and solution-oriented mindset, to spot opportunities and translate them into real growth.
- Ability to thrive in startups with a fast-paced environment and take ownership for working through ambiguity.
- Excited to join a lean team in a big company that moves quickly.
Posted 1 week ago