3.0 - 6.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark
Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka (a streaming sketch follows this posting)
- Experience in cloud computing, e.g., AWS, GCP, Azure
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills in building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components)
- Experience working in Agile and Scrum development processes
- Experience with EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms
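As a rough illustration of the Kafka/Spark streaming pipelines this posting describes, here is a minimal PySpark Structured Streaming sketch that reads events from Kafka and appends them to a Delta table. The broker address, topic name, schema, and paths are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> Delta Lake.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "events")                       # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/chk/events")     # placeholder path
    .outputMode("append")
    .start("/tmp/delta/events")                          # placeholder path
)
query.awaitTermination()
```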
Posted 1 week ago
3.0 - 6.0 years
2 - 6 Lacs
Chennai
Work from Office
Tech stack: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark
Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development on AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR).
Required skills: Amazon Kinesis, Amazon Aurora, data warehouse, SQL, AWS Lambda, Spark, AWS QuickSight; advanced Python skills; data engineering, ETL, and ELT skills; experience with cloud platforms (AWS, GCP, or Azure).
Mandatory skills: data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
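For context on the Glue/PySpark ETL work this posting names, here is a minimal AWS Glue job sketch that reads a table from the Glue Data Catalog, applies a simple transform, and lands curated Parquet on S3. The database, table, and bucket names are hypothetical placeholders.

```python
# Minimal AWS Glue ETL sketch: Data Catalog table -> simple transform -> S3 Parquet.
# Database, table, and bucket names are illustrative placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
).toDF()

# Basic cleanup: de-duplicate and drop zero-value rows.
cleaned = orders.dropDuplicates(["order_id"]).filter("order_total > 0")

# Land the curated output as Parquet on S3.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```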
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.
Your Role
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.
Your Profile
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 week ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Roles & Responsibilities:
- Total 8-10 years of working experience
- 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems; you will be dealing with huge volumes of data sets arriving through batch and streaming platforms
- Build and deliver data pipelines that process, transform, integrate and enrich data to meet various demands from the business
- Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test and deploy streaming pipelines for data processing in real time and at scale
- Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object-functional scripting languages: Scala, Java, etc.
- Develop software systems using test-driven development, employing CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Good to have banking/finance domain knowledge
- Strong written and oral communication, presentation and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment
- 8-10 years of hands-on coding experience
- Proficient in Java, with good knowledge of its ecosystem
- Experience writing Spark code in Scala
- Experience with big data tools like Sqoop, Hive, Pig, Hue
- Solid understanding of object-oriented programming and HDFS concepts
- Familiar with various design and architectural patterns
- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.
- Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra
- Experience with data pipeline tools like Airflow (a DAG sketch follows this posting)
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object-functional scripting languages: Python, Java, Scala, etc.
- Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks
Location: Pune / Mumbai / Bangalore / Chennai
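Because the role calls out Airflow for pipeline orchestration, here is a minimal DAG sketch showing a daily batch pipeline. The DAG ID, schedule, script paths, and spark-submit command are illustrative assumptions, not details from the posting.

```python
# Minimal Airflow DAG sketch: a daily batch pipeline that submits a Spark job
# and then runs a data-quality check. Names, paths, and commands are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_enrichment_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn /opt/jobs/enrich.py {{ ds }}",
    )
    validate = BashOperator(
        task_id="row_count_check",
        bash_command="python /opt/jobs/check_counts.py {{ ds }}",
    )
    transform >> validate
```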
Posted 1 week ago
8.0 - 13.0 years
1 - 4 Lacs
Pune
Work from Office
Roles & Responsibilities:
- Provides expert-level development, system analysis, design and implementation of applications using AWS services, specifically using Python for Lambda
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients); develops code that reuses objects, is well-structured, includes sufficient comments and is easy to maintain
- Provides follow-up production support when needed; submits change control requests and documents
- Participates in design, code and test inspections throughout the life cycle to identify issues and ensure methodology compliance
- Participates in systems analysis activities, including system requirements analysis and definition (e.g., prototyping), and in other meetings such as those for use case creation and analysis
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied; assists in integration, systems acceptance and other related testing as needed
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests
Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL
- Deep experience developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations (a brief sketch follows this posting)
- Experience in deployment and operationalizing code using CI/CD tools Bitbucket and Bamboo
- Strong AWS cloud computing experience; extensive experience in Lambda, S3, EMR, Redshift
- Should have worked on data warehouse/database technologies for at least 8 years
- Any AWS certification will be an added advantage
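As a loose illustration of the PySpark enrichment tasks described above (read from an external source, merge, enrich, load to a target), here is a minimal sketch of the kind that might run on EMR. The S3 paths, join key, and column names are hypothetical.

```python
# Minimal PySpark enrichment sketch (e.g. run on EMR): read raw and reference
# data, join to enrich, and load to a target location. Paths and keys are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("enrich-transactions").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/transactions/")        # external source
customers = spark.read.parquet("s3://example-bucket/ref/customers/")  # reference data

enriched = (
    raw.join(customers, on="customer_id", how="left")
       .withColumn("load_date", F.current_date())
       .filter(F.col("amount").isNotNull())
)

enriched.write.mode("append").partitionBy("load_date") \
        .parquet("s3://example-bucket/curated/transactions/")         # target destination
```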
Posted 1 week ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
Experience with Scala (object-oriented/functional programming). Strong SQL background. Experience in Spark SQL, Hive, and data engineering. Experience with data pipelines and data lakes. Strong background in distributed computing.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: SQL; experience with data pipelines and data lakes; strong background in distributed computing; experience with Scala (object-oriented/functional programming); strong SQL background.
Preferred technical and professional experience: core Scala development experience.
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies
- Experience in developing streaming pipelines
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka
Preferred technical and professional experience: certification in AWS and Databricks, or Cloudera Spark certified developers.
Posted 1 week ago
5.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies
- Experience in developing streaming pipelines
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
- Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Exposure to streaming solutions and message brokers like Kafka
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers
- AWS S3, Redshift, and EMR for data storage and distributed processing
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes (a brief sketch of this pattern follows this posting)
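To illustrate the serverless, event-driven pattern mentioned above (Lambda, Step Functions, Glue), here is a minimal Lambda handler sketch that starts a Step Functions execution whenever a new object lands in S3. The state machine ARN, environment variable, and bucket layout are hypothetical assumptions.

```python
# Minimal sketch: S3-triggered Lambda that starts a Step Functions state machine,
# which would orchestrate downstream Glue/EMR ETL steps. ARN and keys are placeholders.
import json
import os
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # One execution per newly created S3 object in the triggering event.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # placeholder env var
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"status": "started"}
```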
Posted 1 week ago
5.0 - 8.0 years
15 - 18 Lacs
Hyderabad, Bengaluru
Hybrid
Cloud and AWS Expertise: In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena. Strong understanding of cloud architecture and best practices for high availability and fault tolerance.
Data Engineering Concepts: Expertise in ETL/ELT processes, data modeling, and data warehousing. Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark. Proficiency in handling structured and unstructured data.
Programming and Scripting: Proficiency in Python, PySpark and SQL for data manipulation and pipeline development. Expertise in working with data warehousing solutions like Redshift.
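As a small illustration of the ELT pattern named above (load into Redshift, then transform in-database), here is a hedged sketch using the Redshift Data API via boto3. The cluster identifier, database, IAM role, and table names are invented placeholders, and the statements run asynchronously.

```python
# Minimal ELT sketch: load staged S3 files into Redshift with COPY, then transform
# in-database. Cluster, database, IAM role, and table names are placeholders.
import boto3

rsd = boto3.client("redshift-data")

copy_sql = """
    COPY staging.orders
    FROM 's3://example-bucket/staged/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
"""

transform_sql = """
    INSERT INTO analytics.daily_order_totals
    SELECT order_date, SUM(order_total)
    FROM staging.orders
    GROUP BY order_date;
"""

# Note: execute_statement is asynchronous; a real pipeline would poll for completion.
for sql in (copy_sql, transform_sql):
    rsd.execute_statement(
        ClusterIdentifier="example-cluster",  # placeholder
        Database="analytics",                 # placeholder
        DbUser="etl_user",                    # placeholder
        Sql=sql,
    )
```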
Posted 1 week ago
8.0 - 13.0 years
9 - 14 Lacs
Bengaluru
Work from Office
- 8+ years of experience combined across backend and data platform engineering roles
- Worked on large-scale distributed systems
- 5+ years of experience building data platforms with Apache Spark, Flink, or similar frameworks
- 7+ years of experience programming with Java
- Experience building large-scale data/event pipelines
- Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, MongoDB
- Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache and other AWS services
Posted 1 week ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
- PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering.
- AWS cloud services (Lambda, Glue, S3, IAM): working with cloud-based data pipelines.
- Airflow, GitHub: essential for orchestration and version control in data workflows.
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
You have an entrepreneurial spirit. You enjoy working as part of well-knit teams. You value the team over the individual. You welcome diversity at work and within the greater community. You aren't afraid to take risks. You appreciate a growth path with your leadership team that charts how you can grow inside and outside of the organization. You thrive on continuing education programs that your company sponsors to strengthen your skills and help you become a thought leader ahead of the industry curve.
Must have:
- 3-6 years of hands-on experience in HL7 interfaces built on Epic/Cerner/Allscripts or integration engines such as Cloverleaf and HealthConnect
- Understanding of US healthcare workflows
- Experience performing configuration changes and system builds in the Epic EHR (Electronic Health Record) platform
- Experience in Agile development methodology
- Ability to estimate work products
- Ability to understand Service Level Agreement (SLA) methodology and follow it as per engagement requirements
- Ability to perform problem management activities such as root cause analysis of incidents
- Excellent documentation skills, such as application understanding and change management
- Good interpersonal and communication skills
- Flexibility to adapt and apply innovation to varied business domains, and to apply technical solutioning and learnings to use cases across business domains and industries
- Knowledge of and experience working with Microsoft Office tools
Good to have:
- Epic Bridges certification (not mandatory)
- Cloverleaf or HealthConnect certification (not mandatory)
- Excellent documentation skills, such as application understanding and change management
- Ability to follow engagement-specific project delivery processes
- Proactive drive on improvement and innovation ideas
Posted 1 week ago
3.0 - 5.0 years
10 - 15 Lacs
Pune
Work from Office
Job Description: Sr. Software Engineer (Python)
Company: Karini AI
Location: Pune (Wakad)
Experience Required: 3 - 5 years
Compensation: Not disclosed
Role Overview: We are seeking a skilled Sr. Software Engineer with advanced Python skills, a passion for product development, and knowledge of Machine Learning and/or Generative AI. You will collaborate with a talented team of engineers and AI engineers to design and develop a high-quality Generative AI platform on AWS.
Key Responsibilities:
- Design and develop backend applications and APIs using Python
- Work on product development, building robust, scalable, and maintainable solutions
- Integrate Generative AI models into production environments to solve real-world problems
- Collaborate with cross-functional teams, including data scientists, product managers, and designers, to understand requirements and deliver solutions
- Optimize application performance and ensure scalability across cloud environments
- Write clean, maintainable, and efficient code while adhering to best practices
Requirements:
- 3-5 years of hands-on experience in product development
- Demonstrable experience with advanced Python concepts for building scalable systems
- Demonstrable experience working with a FastAPI server in a production environment
- Familiarity with unit testing, version control and CI/CD
- Good understanding of Machine Learning concepts and frameworks (e.g., TensorFlow, PyTorch); experience integrating and deploying ML models into applications is a plus
- Knowledge of database systems (SQL/NoSQL) and RESTful API development
- Exposure to containerization (Docker) and cloud platforms (AWS)
- Strong problem-solving skills and attention to detail
Preferred Qualifications:
- Bachelor of Engineering in Computer Science, Information Technology, or any other engineering discipline; M.Tech, M.E. or B.E. in Computer Science preferred
- Hands-on experience in product-focused organizations
- Experience working with data pipelines or data engineering tasks
- Knowledge of CI/CD pipelines and DevOps practices
- Familiarity with version control tools like Git
- Interest or experience in Generative AI or NLP applications
What We Offer:
- Top-tier compensation package, aligned with industry benchmarks
- Comprehensive employee benefits, including Provident Fund (PF) and medical insurance
- Experience working with an ex-AWS founding team at a fast-growing company
- Work on innovative AI-driven products that solve complex problems
- Collaborate with a talented and passionate team in a dynamic environment
- Opportunities for professional growth and skill enhancement in Generative AI
- A supportive, inclusive, and flexible work culture that values creativity and ownership
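Since the role specifically asks for production experience with FastAPI, here is a minimal typed-endpoint sketch. The route, request/response models, and the inference function are invented placeholders, not Karini AI's actual API or service.

```python
# Minimal FastAPI sketch: a typed request/response endpoint of the kind a Python
# backend role like this might own. The model-inference call is a stub.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-inference-api")

class PromptRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class PromptResponse(BaseModel):
    completion: str

def run_model(prompt: str, max_tokens: int) -> str:
    # Placeholder for a real Generative AI model or service call.
    return f"(stub) echoing first 40 chars: {prompt[:40]}"

@app.post("/v1/generate", response_model=PromptResponse)
def generate(req: PromptRequest) -> PromptResponse:
    return PromptResponse(completion=run_model(req.prompt, req.max_tokens))

# Local run (assuming uvicorn is installed): uvicorn app:app --reload
```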
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC and the Middle East, with development centers in India (Hyderabad, Pune and Bangalore).
Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days office in a week)
Job Description:
- 5-14 years of experience in Big Data and data-related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of Big Data querying tools, such as Hive and Impala
- Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, relational schemas
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with native cloud data services (AWS)
- Ability to lead a team efficiently
- Experience with designing and implementing Big Data solutions
- Practitioner of Agile methodology
WE OFFER
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge-sharing opportunities, learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
Posted 1 week ago
5.0 - 8.0 years
15 - 27 Lacs
Hyderabad
Work from Office
Dear Candidate,
We are pleased to invite you to participate in the EY GDS face-to-face hiring event for the position of AWS Data Engineer.
Role: AWS Data Engineer
Experience Required: 5-8 years
Location: Hyderabad
Mode of interview: Face to face
JD - Technical Skills:
- Must have strong experience in AWS data services like Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow and PySpark
- Strong exposure to IAM, CloudTrail, cluster optimization, Python and SQL
- Should have expertise in data design, STTM (source-to-target mapping), understanding of data models, data component design, automated testing, code coverage, UAT support, deployment and go-live
- Experience with version control systems like SVN and Git
- Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion processes across various structured and unstructured data sources
- Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog
- Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance
- Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity, etc.)
Kindly confirm your availability by applying to this job.
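Related to the crawler and catalog work in the skills above, here is a minimal boto3 sketch that creates and starts a Glue crawler so newly landed S3 data gets cataloged automatically. The crawler name, IAM role ARN, database, and S3 path are hypothetical.

```python
# Minimal boto3 sketch: create and start an AWS Glue crawler so new S3 data is
# cataloged automatically. Role ARN, database, and path are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="orders-raw-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-crawler",
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/orders/"}]},
    SchemaChangePolicy={
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "LOG",
    },
)

glue.start_crawler(Name="orders-raw-crawler")
```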
Posted 1 week ago
12.0 - 16.0 years
40 - 45 Lacs
Gurugram
Work from Office
Overview: Enterprise Data Operations Assoc Manager
Job Overview: As Data Modelling Assoc Manager, you will be the key technical expert overseeing data modeling and will drive a strong vision for how data modelling can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data modelers who create data models for deployment in the Data Foundation layer, ingest data from various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data modelling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will independently analyze project data needs, identify data storage and integration needs/issues, and drive opportunities for data model reuse, satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will be a key technical expert performing all aspects of data modelling, working closely with the Data Governance, Data Engineering and Data Architecture teams, and will provide technical guidance to junior members of the team as and when needed. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities:
- Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
- Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both onshore and offshore), orienting new contractors to standards, best practices, and tools.
- Advocate existing Enterprise Data Design standards; assist in establishing and documenting new standards.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; data streaming (consumption/production); data in transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the data science team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.
Qualifications:
- 12+ years of overall technology experience, including at least 6+ years of data modelling and systems architecture.
- 6+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 6+ years of experience developing enterprise data models.
- 6+ years of cloud data engineering experience in at least one cloud (Azure, AWS, GCP).
- 6+ years of experience building solutions in the retail or supply chain space.
- Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models).
- Fluent with Azure cloud services; Azure certification is a plus.
- Experience scaling and managing a team of 5+ data modelers.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading, mentoring, hiring and scaling data teams.
- Strong change manager; comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs.
- Foster a team culture of accountability, communication, and self-management.
- Proactively drive impact and engagement while bringing others along.
- Consistently attain/exceed individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
Differentiating Competencies Required:
- Ability to work with virtual teams (remote work locations); lead a team of technical resources (employees and contractors) based in multiple locations across geographies.
- Lead technical discussions, driving clarity of complex issues/requirements to build robust solutions.
- Strong communication skills to meet with the business, understand sometimes ambiguous needs, and translate them into clear, aligned requirements.
- Able to work independently with business partners to understand requirements quickly, perform analysis and lead design review sessions.
- Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business.
- Places the user at the center of decision making.
- Teams up and collaborates for speed, agility, and innovation.
- Experience with, and embraces, agile methodologies.
- Strong negotiation and decision-making skills.
- Experience managing and working with globally distributed teams.
Posted 1 week ago
7.0 - 11.0 years
10 - 14 Lacs
Chennai
Work from Office
What you'll do: As a Software Engineer, you will work with a world-class team developing and deploying new technologies on a cutting-edge network. You will design, develop and deploy new and innovative technology into a service provider network. Viasat's unique position as both a service provider and an equipment manufacturer allows you to experience the whole life cycle of software development, from design to deployment.
The day-to-day: You will be a member of the software team involved in embedded software development. The software interacts with different network elements: on the access network side it adapts to the L2 subsystem, and on the CSN network side it adapts to service network components. Our team members enjoy working closely with each other using an agile development methodology. Priorities can change quickly, but our team members stay ahead of deadlines to delight every one of our customers, whether internal or external to Viasat. We are searching for candidates who enjoy working with people and have a technical mind that excels when challenged.
What you'll need:
- 7 to 11 years of software engineering experience in Java, with a strong emphasis on software architecture and design on Unix/Linux-based platforms
- Experience with network programming and concurrent/multithreaded programming
- Experience building CI/CD pipelines and automated software deployments
- Experience working in a cloud environment (AWS EMR)
- Familiarity with Hadoop and data processing technologies such as Kafka is advantageous
- Problem-solving experience and a DevOps approach
- Strong oral and written communication skills
- Bachelor's degree in Computer Science, Electrical Engineering, or related engineering disciplines
- Up to 10% travel
What will help you on the job:
- Knowledge of tools like Jenkins, JIRA, and Git
- Experience with bash, Ansible and Python scripting in Linux
- Experience with telecom/networking/satellite/wireless communications
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 08
One of the most valuable assets in today's financial industry is data, which can provide businesses the intelligence essential to making business and financial decisions with conviction. This role will give you the opportunity to work on Ratings and Research related data. You will work on cutting-edge big data technologies and will be responsible for development of both data feeds and API work.
Location: Hyderabad
The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide ratings and research information to clients. Our work deals with content ingestion, data feed generation, and exposing the data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible.
Impact: As a member of the Xpressfeed team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our software engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application design, write high-quality code and innovate on how to improve overall system performance and customer experience. If you are a talented developer who wants to help drive the next phase for Data Management Solutions at S&P Global, can contribute great ideas, solutions and code, and understands the value of cloud solutions, we would like to talk to you.
What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development. In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server-based database development. This position offers a unique opportunity to grow both as a Full Stack Developer and as a Cloud Engineer, expanding your expertise across modern data platforms and backend development.
Responsibilities:
- Analyze, design and develop solutions within a multi-functional Agile team to support key business needs for the data feeds
- Design, implement and test solutions using AWS EMR for content ingestion
- Work on complex SQL Server projects involving high-volume data
- Engineer components and common services based on standard corporate development models, languages and tools
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery
- Collaborate effectively with technical and non-technical stakeholders
- Document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.
Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
- 3-6 years of experience in application development
- Minimum of 2 years of hands-on experience with Scala
- Minimum of 2 years of hands-on experience with Microsoft SQL Server
- Solid understanding of Amazon Web Services (AWS) and cloud-based development
- In-depth knowledge of system architecture, object-oriented programming, and design patterns
- Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing
Preferred Qualifications:
- Familiarity with AWS services: EMR, Auto Scaling, EKS
- Working knowledge of Snowflake
- Experience in Python development preferred
- Familiarity with the Financial Services domain and capital markets is a plus
- Experience developing systems that handle large volumes of data and require high computational performance
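As a loose illustration of the EMR-based content ingestion mentioned in this posting's responsibilities, here is a minimal boto3 sketch (shown in Python for brevity; the team's stack centers on Scala and SQL Server) that submits a Spark step to an existing EMR cluster. The cluster ID and job script location are hypothetical.

```python
# Minimal boto3 sketch: submit a Spark step to an existing EMR cluster for a
# content-ingestion job. Cluster ID and script location are placeholders.
import boto3

emr = boto3.client("emr")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLECLUSTER",  # placeholder cluster ID
    Steps=[
        {
            "Name": "ingest-ratings-feed",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://example-bucket/jobs/ingest_feed.py",  # placeholder job script
                ],
            },
        }
    ],
)
print("Submitted step:", response["StepIds"][0])
```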
Posted 1 week ago
5.0 - 10.0 years
12 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Summary: AWS Developer (Offshore Role)
Role Overview: Responsible for migrating and transforming data pipelines from legacy Cloudera/Hadoop systems to AWS-native solutions, using tools like PySpark, MWAA, and EMR.
Key Responsibilities:
- Develop ingestion pipelines (batch and stream) to move data to S3
- Convert HiveQL to SparkSQL/PySpark
- Orchestrate workflows using MWAA (Airflow)
- Build and manage Iceberg tables with proper partitioning and metadata (a brief sketch follows this posting)
- Perform job validation and implement unit testing
Required Skills:
- 3-5 years of data engineering experience, with strong AWS expertise
- Proficient in EMR (Spark), S3, PySpark, and SQL
- Familiar with Cloudera/HDFS and legacy Hadoop pipelines
- Knowledge of data lake/lakehouse architectures is a plus
Mandatory: AWS Developer experience.
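To illustrate the Iceberg table work and HiveQL-to-SparkSQL conversion listed above, here is a minimal PySpark sketch that creates a partitioned Iceberg table and appends data selected from a legacy table. The catalog configuration, warehouse path, and table names are assumptions to be adjusted to the real environment.

```python
# Minimal PySpark + Apache Iceberg sketch: create a partitioned Iceberg table and
# append converted legacy data into it. Catalog and table names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-migration")
    # Assumed Iceberg catalog configuration; adjust to the real environment.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse/")
    .getOrCreate()
)

spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.sales.orders (
        order_id STRING,
        order_total DOUBLE,
        order_date DATE
    )
    USING iceberg
    PARTITIONED BY (days(order_date))
""")

# A legacy HiveQL query rewritten as SparkSQL, then appended into the Iceberg table.
converted = spark.sql("SELECT order_id, order_total, order_date FROM legacy_db.orders")
converted.writeTo("lake.sales.orders").append()
```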
Posted 1 week ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Job Information:
- Job Opening ID: ZR_1581_JOB
- Date Opened: 25/11/2022
- Industry: Technology
- Job Type:
- Work Experience: 8-12 years
- Job Title: Senior Specialist- Data Engineer
- City: Pune
- Province: Maharashtra
- Country: India
- Postal Code: 411001
- Number of Positions: 4
Location: Pune / Mumbai / Bangalore / Chennai
Roles & Responsibilities:
- Total 8-10 years of working experience
- 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems; you would be dealing with huge volumes of data sets arriving through batch and streaming platforms
- Build and deliver data pipelines that process, transform, integrate and enrich data to meet various demands from the business
- Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test and deploy streaming pipelines for data processing in real time and at scale
- Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object-functional scripting languages: Scala, Java, etc.
- Develop software systems using test-driven development, employing CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Good to have banking/finance domain knowledge
- Strong written and oral communication, presentation and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment
- 8-10 years of hands-on coding experience
- Proficient in Java, with good knowledge of its ecosystem
- Experience writing Spark code in Scala
- Experience with big data tools like Sqoop, Hive, Pig, Hue
- Solid understanding of object-oriented programming and HDFS concepts
- Familiar with various design and architectural patterns
- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.
- Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra
- Experience with data pipeline tools like Airflow
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object-functional scripting languages: Python, Java, Scala, etc.
- Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks
Posted 1 week ago
7.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information:
- Job Opening ID: ZR_2162_JOB
- Date Opened: 15/03/2024
- Industry: Technology
- Job Type:
- Work Experience: 7-9 years
- Job Title: Sr Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560004
- Number of Positions: 5
Mandatory Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark
Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka
- Experience in cloud computing, e.g., AWS, GCP, Azure
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills in building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components)
- Experience working in Agile and Scrum development processes
- Experience with EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms
Posted 1 week ago
6.0 - 10.0 years
1 - 4 Lacs
Pune
Work from Office
Job Information:
- Job Opening ID: ZR_1594_JOB
- Date Opened: 29/11/2022
- Industry: Technology
- Job Type:
- Work Experience: 6-10 years
- Job Title: AWS GLUE Engineer
- City: Pune
- Province: Maharashtra
- Country: India
- Postal Code: 411001
- Number of Positions: 4
Roles & Responsibilities:
- Provides expert-level development, system analysis, design and implementation of applications using AWS services, specifically using Python for Lambda
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients); develops code that reuses objects, is well-structured, includes sufficient comments and is easy to maintain
- Provides follow-up production support when needed; submits change control requests and documents
- Participates in design, code and test inspections throughout the life cycle to identify issues and ensure methodology compliance
- Participates in systems analysis activities, including system requirements analysis and definition (e.g., prototyping), and in other meetings such as those for use case creation and analysis
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied; assists in integration, systems acceptance and other related testing as needed
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests
Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL
- Deep experience developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations
- Experience in deployment and operationalizing code using CI/CD tools Bitbucket and Bamboo
- Strong AWS cloud computing experience; extensive experience in Lambda, S3, EMR, Redshift
- Should have worked on data warehouse/database technologies for at least 8 years
- Any AWS certification will be an added advantage
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Chennai
Work from Office
Job Information:
- Job Opening ID: ZR_1668_JOB
- Date Opened: 19/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: Sr. AWS Developer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600001
- Number of Positions: 4
Tech stack: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark
Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development on AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR).
Required skills: Amazon Kinesis, Amazon Aurora, data warehouse, SQL, AWS Lambda, Spark, AWS QuickSight; advanced Python skills; data engineering, ETL, and ELT skills; experience with cloud platforms (AWS, GCP, or Azure).
Mandatory skills: data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Must-have skills: SQL, Python, healthcare data, claims data, EMR, healthcare data standards, large relational databases
Good-to-have skills: RWD, RWE, HEOR
We are looking for a highly motivated real-world evidence (RWE) data scientist who has experience in generating insights/evidence from claims and EHR real-world data (RWD) to join our growing Bangalore-based RWE analytics team at Clarivate.
About you (experience, education, skills, and accomplishments):
- Graduate degree in data science/analytics, epidemiology, biostatistics, or a related quantitative field
- At least 3 years of experience in a consultative, client-facing role
- At least 3 years of experience using SQL and Python, programming against large relational databases and leveraging interoperable, linked, patient-level data at scale
- Healthcare data expertise across various data types (e.g., open/closed claims, inpatient/ambulatory EMR, commercial labs, social determinants) and codified healthcare data standards (e.g., ICD, CPT, HCPCS, LOINC, SNOMED)
It would be great if you also had:
- Experience evaluating fit-for-purpose data and implementing research protocols
- Experience applying RWD to specific healthcare and life sciences-related research questions and use cases, such as RWE/epidemiology, HEOR, R&D, commercial, and public health
What will you be doing in this role?
- Efficiently query multiple data types (medical and pharmacy claims, EMR, lab, charge master) using SQL and Python to identify actionable insights for clients
- Empower clients to generate RWE using best-in-class observational research by conducting pre-sale feasibility analyses of varying breadth and depth
- Consult with clients to identify business problems and generate analytics-based solutions
- Develop and communicate technical, operational, and business specifications to junior analysts and engagement leads
- Work cross-functionally to support operational processes to deliver data analytics projects on time and with accuracy
- Contribute to the development and maintenance of internal documentation, code templates, analytics automation, and other process-improvement initiatives to support internal team efficiency, effectiveness, and growth
About the team: We are a highly motivated team of 20+ analytics, biostatistics, epidemiology, and data science professionals distributed across three countries, working together to provide analytics and insights using Clarivate's RWD product for pharmaceutical, biopharma, and MedTech clients.
Hours of work: You will be expected to work a schedule of 12:00 PM IST to 9:00 PM IST to provide reasonable hours of collaborative work with the US team, with slight extensions possible on an as-needed basis.
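As a small illustration of the claims-querying work this role describes, here is a hedged sketch that pulls a simple condition cohort from a claims table by ICD-10 prefix (E11 is the type 2 diabetes code family). The connection string, schema, table, and column names are invented for illustration and do not reflect Clarivate's actual data model.

```python
# Minimal sketch: query a claims table for a condition cohort by ICD-10 prefix.
# Connection details, table, and column names are illustrative placeholders.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@host:5432/rwd")  # placeholder DSN

query = text("""
    SELECT patient_id, MIN(service_date) AS first_dx_date
    FROM claims.medical_claims
    WHERE diagnosis_code LIKE :icd_prefix     -- e.g. 'E11%' for type 2 diabetes
    GROUP BY patient_id
""")

cohort = pd.read_sql(query, engine, params={"icd_prefix": "E11%"})
print(f"Cohort size: {len(cohort)} patients")
```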
Posted 1 week ago
6.0 - 9.0 years
8 - 11 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
About the Role: Grade Level (for internal use): 10
S&P Dow Jones Indices
The Role: S&P Dow Jones Indices, a global leader in providing investable and benchmark indices to the financial markets, is looking for a Java Application Developer to join our technology team.
The Location: Mumbai/Hyderabad/Chennai
The Team: You will be part of a global technology team comprising Dev, QA and BA teams and will be responsible for analysis, design, development and testing.
The Impact: You will be working on one of the core technology platforms responsible for the end-of-day calculation as well as dissemination of index values.
What's in it for you: You will have the opportunity to work on enhancements to the existing index calculation system as well as implement new methodologies as required.
Responsibilities:
- Design and development of Java applications for SPDJI web sites and their feeder systems
- Participate in multiple software development processes, including coding, testing, debugging and documentation
- Develop software applications based on clear business specifications
- Work on new initiatives and support existing index applications
- Perform application and system performance tuning and troubleshoot performance issues
- Develop web-based applications and build rich front-end user interfaces
- Build applications with object-oriented concepts and apply design patterns
- Integrate in-house applications with various vendor software platforms
- Set up development environments/sandboxes for application development
- Check application code changes into the source repository
- Perform unit testing of application code and fix errors
- Interface with databases to extract information and build reports
- Effectively interact with customers, business users and IT staff
What we're looking for:
Basic Qualifications:
- Bachelor's degree in Computer Science, Information Systems or Engineering, or, in lieu, a demonstrated equivalence in work experience
- 6 to 9 years of IT experience in application development and support
- Strong experience with Java, J2EE, JMS and EJBs
- Advanced SQL and basic PL/SQL programming
- Basic networking knowledge / Unix scripting
- Exposure to UI technologies like React JS
- Basic understanding of AWS cloud (EC2, EMR, Lambda, S3, Glue, etc.)
- Excellent communication and interpersonal skills, with strong verbal and writing proficiencies
Preferred Qualifications:
- Experience working with large datasets in Equity, Commodities, Forex, Futures and Options asset classes
- Experience with index/benchmark, asset management or trading platforms
- Basic knowledge of user interface design and development using jQuery, HTML5 and CSS
Posted 1 week ago