5.0 - 10.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Description: Data Engineer/Lead

Required Minimum Qualifications:
- Bachelor's degree in Computer Science, CIS, or a related field
- 5-10 years of IT experience in software engineering or a related field
- Experience on projects involving the implementation of software development life cycles (SDLC)

Primary Skills: PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc)

- Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open-source tools and data processing frameworks.
- Hands-on experience with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, and Airflow.
- Experience in GCP Cloud Composer, BigQuery, and Dataproc.
- Offer system support as part of a support rotation with other team members.
- Operationalize open-source data-analytic tools for enterprise use.
- Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
- Understand and follow the company development lifecycle to develop, deploy, and deliver.
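For context, a minimal sketch of the kind of PySpark-on-GCP job this posting describes: read from BigQuery, transform, write back. It assumes the spark-bigquery connector is available (standard on Dataproc); all project, dataset, table, and bucket names are hypothetical placeholders, not anything from the posting.

```python
# Minimal PySpark sketch: read from BigQuery, aggregate, write back.
# Assumes the spark-bigquery connector is on the classpath (as on Dataproc);
# table and bucket names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bq-ingestion-example").getOrCreate()

# Read a source table via the BigQuery connector.
orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales.orders")  # hypothetical table
    .load()
)

# A simple transformation: daily totals per region.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write results back to BigQuery (the connector stages via a GCS bucket).
(
    daily.write.format("bigquery")
    .option("table", "my-project.sales.daily_totals")   # hypothetical table
    .option("temporaryGcsBucket", "my-staging-bucket")  # hypothetical bucket
    .mode("overwrite")
    .save()
)
```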
Posted 5 days ago
5.0 - 8.0 years
10 - 20 Lacs
Pune, Chennai, Mumbai (All Areas)
Hybrid
Hello Connections, Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.

Job Profile: Data Engineer
Experience: Minimum 6 to maximum 9 years
Location: Chennai / Hyderabad / Bangalore / Gurgaon / Pune
Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive
Qualification: B.Tech / B.E / MCA / Computer Science background - any specialization

How to Apply? Send your CV to: sipriyar@sightspectrum.in
Contact Number: 6383476138

Don't miss out on this amazing opportunity to accelerate your professional career!

#bigdata #dataengineer #hadoop #spark #python #hive #pyspark
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
- 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems.
- At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation.
- Strong experience in programming languages: Java/J2EE/Scala.
- Good experience in Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases.
- Experience with batch processing and AutoSys job scheduling and monitoring.
- Performance analysis, troubleshooting, and resolution (this includes familiarity with and investigation of Cloudera/Hadoop logs).
- Work with Cloudera on open issues that would result in cluster configuration changes, and implement them as needed.
- Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc.
- Knowledge of Hadoop security, data management, and governance.

Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD
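The posting centers on Spark plus Confluent Kafka; below is a minimal Spark Structured Streaming sketch of that pairing. The role is Java/Scala-oriented, so treat this PySpark version as illustrative only; broker, topic, and HDFS paths are hypothetical, and the spark-sql-kafka package must be on the cluster.

```python
# Minimal Structured Streaming sketch: consume a Kafka topic, land to HDFS
# as parquet. Broker, topic, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest-example").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing.
events = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/events")              # hypothetical path
    .option("checkpointLocation", "hdfs:///chk/events")
    .start()
)
query.awaitTermination()
```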
Posted 1 week ago
4.0 - 9.0 years
17 - 27 Lacs
Chennai, Bengaluru
Work from Office
Role & responsibilities
- Experience with big data technologies (Hadoop, Spark, Hive).
- Proven experience as a development data engineer or similar role, with an ETL background.
- Experience with data integration/ETL best practices and data quality principles.
- Play a crucial role in ensuring the quality and reliability of the data by designing, implementing, and executing comprehensive testing.
- Build a comprehensive code base and business rules for testing and validating the data, based on the user stories.
- Knowledge of continuous integration and continuous deployment (CI/CD) pipelines.
- Familiarity with Agile/Scrum development methodologies.
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration skills.
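Since this role emphasizes testing and data quality, here is a hedged sketch of the kind of automated checks such a role might codify: null, duplicate, and business-rule validations on a PySpark DataFrame. Table and column names are illustrative assumptions.

```python
# Sketch of simple data-quality validations in PySpark. The table name
# "curated.customers" and its columns are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-example").getOrCreate()
df = spark.table("curated.customers")  # hypothetical table

checks = {
    # Primary key must be present and unique.
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_customer_id": (
        df.groupBy("customer_id").count().filter(F.col("count") > 1).count()
    ),
    # Business rule: signup_date must not be in the future.
    "future_signup_date": df.filter(
        F.col("signup_date") > F.current_date()
    ).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise AssertionError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
```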
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Gurugram, Chennai
Hybrid
We are looking for energetic, high-performing, and highly skilled Java + Big Data Engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Enterprise Personalization portfolio focused on delivering the next generation of global marketing capabilities. This team is responsible for building products that power Merchant Offers personalization for Amex card members.

Job Description:
- Demonstrated leadership in designing sustainable software products, setting development standards, automated code review processes, continuous builds, rigorous testing, etc.
- Ability to effectively lead and communicate across third parties, technical and business product managers on solution design.
- Primary focus is writing code and API specs, conducting code reviews and testing in ongoing sprints, and doing proofs of concept/automation tools.
- Applies visualization and other techniques to fast-track concepts.
- Functions as a core member of an Agile team, driving user story analysis and elaboration, design and development of software applications, testing, and building automation tools.
- Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority.
- Identifies opportunities to adopt innovative technologies.

Qualification:
- Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent work experience.
- 7+ years of software development experience.
- 3-5 years of experience leading teams of engineers.
- Demonstrated experience with Agile or other rapid application development methods.
- Demonstrated experience with object-oriented design and coding.
- Demonstrated experience with these core technical skills (mandatory): Core Java, Spring Framework, Java EE; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; relational databases (Postgres/MySQL/DB2, etc.); data serialization techniques (Avro); cloud development (microservices); parallel and distributed (multi-tiered) systems; application design, software development, and automated testing.
- Demonstrated experience with these additional technical skills (nice to have): Unix/shell scripting; Python/Scala; message queuing and stream processing (Kafka); Elasticsearch; AJAX tools/frameworks; web services, open API development, and REST concepts.
- Experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit.
Posted 1 week ago
5.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Work from Office
S&P Dow Jones Indices is seeking a Python/Big Data developer to be a key player in the implementation and support of data platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance. The candidate should actively evaluate new products and technologies to build solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience, and will assist in day-to-day support and operations functions, design, development, and unit testing.

Responsibilities and Impact:
- Lead the design and implementation of EMR Spark workloads using Python, including data access from relational databases and cloud storage technologies.
- Implement new functionalities using Python, PySpark, AWS, and Delta Lake.
- Independently come up with optimal designs for business use cases and implement them using big data technologies.
- Enhance existing functionalities in Oracle/Postgres procedures and functions.
- Performance-tune existing Spark jobs.
- Implement new functionalities in Python, Spark, and Hive.
- Collaborate with cross-functional teams to support data-driven initiatives.
- Mentor junior team members and promote best practices.
- Respond to technical queries from the operations and product management teams.

What We're Looking For:

Basic Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or Engineering, or equivalent work experience.
- 5-8 years of IT experience in application support or development.
- Hands-on development experience writing effective and scalable Python programs.
- Deep understanding of OOP concepts and development models in Python.
- Knowledge of popular Python libraries/ORM libraries and frameworks.
- Exposure to unit testing frameworks like pytest.
- Good understanding of Spark architecture, as the system involves data-intensive operations.
- Solid work experience in Spark performance tuning.
- Experience/exposure with the Kafka messaging platform.
- Experience with build technologies like Maven and PyBuilder.
- Exposure to AWS offerings such as EC2, RDS, EMR, Lambda, S3, and Redis.
- Hands-on experience with at least one relational database (Oracle, Sybase, SQL Server, PostgreSQL).
- Hands-on experience with SQL queries and writing stored procedures and functions.
- A strong willingness to learn new technologies.
- Excellent communication skills, with strong verbal and writing proficiencies.

Additional Preferred Qualifications:
- Proficiency in building data analytics solutions on AWS Cloud.
- Experience with microservice and serverless architecture implementation.
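The role pairs EMR Spark with Delta Lake; below is a hedged sketch of a Delta Lake merge (upsert), one common pattern for such workloads. S3 paths, join keys, and columns are illustrative assumptions, and the delta-spark package is assumed to be installed on the cluster.

```python
# Minimal Delta Lake upsert sketch. Paths and keys are hypothetical;
# assumes delta-spark is available on the Spark cluster.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-upsert-example")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

# Incoming batch of updated records.
updates = spark.read.parquet("s3://my-bucket/incoming/prices/")  # hypothetical

# Merge into the existing Delta table: update matches, insert new rows.
target = DeltaTable.forPath(spark, "s3://my-bucket/delta/prices")  # hypothetical
(
    target.alias("t")
    .merge(updates.alias("u"), "t.index_id = u.index_id AND t.as_of = u.as_of")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```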
Posted 1 week ago
6.0 - 9.0 years
20 - 25 Lacs
Hyderabad
Hybrid
Role & responsibilities
- Design, build, and measure complex ELT jobs to process disparate data sources and form a high-integrity, high-quality, clean data asset.
- Execute and provide feedback on data modeling policies, procedures, processes, and standards.
- Assist with capturing and documenting system flow and other pertinent technical information about data, database design, and systems.
- Develop data quality standards and tools for ensuring accuracy.
- Work across departments to understand new data patterns.
- Translate high-level business requirements into technical specs.

Qualifications:
- Bachelor's degree in computer science or engineering.
- Years of experience with data analytics, data modeling, and database design.
- Years of experience with Vertica.
- Years of coding and scripting (Python, Java, Scala) and design experience.
- Years of experience with Airflow.
- Experience with ELT methodologies and tools.
- Experience with GitHub.
- Expertise in tuning and troubleshooting SQL.
- Strong data integrity, analytical, and multitasking skills.
- Excellent communication, problem-solving, organizational, and analytical skills.
- Able to work independently.

Additional/preferred skills:
- Familiar with the agile project delivery process.
- Knowledge of SQL and its use in data access and analysis.
- Ability to manage diverse projects impacting multiple roles and processes.
- Able to troubleshoot problem areas and identify data gaps and issues.
- Ability to adapt to a fast-changing environment.
- Experience designing and implementing automated ETL processes.
- Experience with the MicroStrategy reporting tool.

Preferred candidate profile
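This posting pairs ELT jobs with Airflow; a hedged sketch of the orchestration pattern follows. The DAG id, schedule, and callables are hypothetical placeholders, not the employer's pipeline.

```python
# Minimal Airflow 2.x sketch of an ELT DAG: load raw data, then transform
# in-warehouse. DAG id, schedule, and task bodies are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    """Land raw source extracts into the warehouse staging schema."""
    print("extract + load step")


def transform():
    """Run in-warehouse SQL transforms to build the clean data asset."""
    print("transform step")


with DAG(
    dag_id="elt_daily",                 # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    xform = PythonOperator(task_id="transform", python_callable=transform)
    load >> xform  # transform runs only after the load succeeds
```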
Posted 1 week ago
6.0 - 8.0 years
10 - 15 Lacs
Hyderabad
Hybrid
Mega Walk-in Drive for Senior Software Engineer - Informatica Developer

Your future duties and responsibilities:

Job Summary: CGI is seeking a skilled and detail-oriented Informatica Developer to join our data engineering team. The ideal candidate will be responsible for designing, developing, and implementing ETL (Extract, Transform, Load) workflows using Informatica PowerCenter (or Informatica Cloud), as well as optimizing data pipelines and ensuring data quality and integrity across systems.

Key Responsibilities:
- Develop, test, and deploy ETL processes using Informatica PowerCenter or Informatica Cloud.
- Work with business analysts and data architects to understand data requirements and translate them into technical solutions.
- Integrate data from various sources, including relational databases, flat files, APIs, and cloud-based platforms.
- Create and maintain technical documentation for ETL processes and data flows.
- Optimize existing ETL workflows for performance and scalability.
- Troubleshoot and resolve ETL and data-related issues in a timely manner.
- Implement data validation, transformation, and cleansing techniques.
- Collaborate with QA teams to support data testing and verification.
- Ensure compliance with data governance and security policies.

Required qualifications to be successful in this role:
- Minimum 6 years of experience with Informatica PowerCenter or Informatica Cloud.
- Proficiency in SQL and experience with databases like Oracle, SQL Server, Snowflake, or Teradata.
- Strong understanding of ETL best practices and data integration concepts.
- Experience with job scheduling tools like AutoSys, Control-M, or equivalent.
- Knowledge of data warehousing concepts and dimensional modeling.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Good to have: Python or other programming knowledge.
- Bachelor's degree in Computer Science, Information Systems, or a related field.

Preferred Qualifications:
- Experience with cloud platforms like AWS, Azure, or GCP.
- Familiarity with Big Data/Hadoop tools (e.g., Spark, Hive) and modern data architectures.
- Informatica certification is a plus.
- Experience with Agile methodologies and DevOps practices.

Skills: Hadoop, Hive, Informatica, Oracle, Teradata, Unix

Notice Period: 0-45 days

Prerequisites: Aadhaar card copy, PAN card copy, UAN

Disclaimer: The selected candidates will initially be required to work from the office for 8 weeks before transitioning to a hybrid model with 2 days of work from the office each week.
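Informatica mappings themselves are built in the PowerCenter/Cloud designer, but the data-validation duty above is often scripted around them. A hedged sketch of one such check, reconciling counts and sums between source and target via SQLAlchemy; connection URLs, credentials, and table names are entirely hypothetical.

```python
# Sketch of a source-to-target reconciliation a developer might automate
# after an ETL load. URLs and tables below are illustrative assumptions.
from sqlalchemy import create_engine, text

source = create_engine("oracle+oracledb://user:pw@src-host/SRC")     # hypothetical
target = create_engine("postgresql+psycopg2://user:pw@tgt-host/DW")  # hypothetical

CHECKS = [
    ("row_count", "SELECT COUNT(*) FROM customers"),
    ("balance_sum", "SELECT COALESCE(SUM(balance), 0) FROM customers"),
]

for name, sql in CHECKS:
    with source.connect() as s, target.connect() as t:
        src_val = s.execute(text(sql)).scalar()
        tgt_val = t.execute(text(sql)).scalar()
    status = "OK" if src_val == tgt_val else "MISMATCH"
    print(f"{name}: source={src_val} target={tgt_val} [{status}]")
```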
Posted 2 weeks ago
4.0 - 6.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity!!

Job Description:
Experience: 4-6 yrs
Location: Chennai/Hyderabad/Bangalore/Pune/Bhubaneshwar/Kochi
Skill: PySpark
- Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, files.
- Experience in building ETL/data warehouse transformation processes.
- Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and SparkSQL and related frameworks/libraries.
- Developing scalable, reusable, self-service frameworks for data ingestion and processing.
- Integrating end-to-end data pipelines to take data from source to target data repositories, ensuring the quality and consistency of data.
- Processing performance analysis and optimization.
- Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, documentation.
- Experience working with structured and unstructured data.
Good to have (knowledge): 1. Experience in cloud-based solutions. 2. Knowledge of data management principles.

Interested candidates can share your resume to sangeetha.spstaffing@gmail.com with the below details inline:
Full Name as per PAN:
Mobile No:
Alt No/WhatsApp No:
Total Exp:
Relevant Exp in PySpark:
Rel Exp in Python:
Rel Exp in ETL/Bigdata:
Current CTC:
Expected CTC:
Notice Period (Official):
Notice Period (Negotiable)/Reason:
Date of Birth:
PAN number:
Reason for Job Change:
Offer in Pipeline (Current Status):
Availability for virtual interview on weekdays between 10 AM - 4 PM (please mention time):
Current Res Location:
Preferred Job Location:
Whether educational % in 10th std, 12th std, UG is all above 50%?
Do you have any gaps in between your education or career? If so, please mention the duration in months/years:
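For context on the ingestion pattern this JD describes (databases, S3, files), a minimal PySpark sketch follows: pull from a JDBC source and from S3 CSVs, standardize, and write a curated zone. All connection details, paths, and columns are illustrative assumptions.

```python
# Minimal multi-source ingestion sketch in PySpark. URLs, credentials,
# paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-example").getOrCreate()

# Source 1: relational database over JDBC (driver jar must be available).
db_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")  # hypothetical
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Source 2: CSV files on S3.
file_df = (
    spark.read.option("header", "true")
    .csv("s3a://my-bucket/raw/orders/")                     # hypothetical
)

# Standardize the two feeds, tag lineage, and combine.
combined = (
    db_df.select("order_id", "amount").withColumn("src", F.lit("db"))
    .unionByName(
        file_df.select("order_id", "amount").withColumn("src", F.lit("s3"))
    )
)

combined.write.mode("overwrite").parquet("s3a://my-bucket/curated/orders/")
```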
Posted 2 weeks ago
4.0 - 6.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity!!

Job Description:
Experience: 4-6 yrs
Location: Chennai/Hyderabad/Bangalore/Pune/Bhubaneshwar/Kochi
Skill: PySpark
- Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, files.
- Experience in building ETL/data warehouse transformation processes.
- Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and SparkSQL and related frameworks/libraries.
- Developing scalable, reusable, self-service frameworks for data ingestion and processing.
- Integrating end-to-end data pipelines to take data from source to target data repositories, ensuring the quality and consistency of data.
- Processing performance analysis and optimization.
- Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, documentation.
- Experience working with structured and unstructured data.
Good to have (knowledge): 1. Experience in cloud-based solutions. 2. Knowledge of data management principles.

Interested candidates can share your resume to sangeetha.spstaffing@gmail.com with the below details inline:
Full Name as per PAN:
Mobile No:
Alt No/WhatsApp No:
Total Exp:
Relevant Exp in PySpark:
Rel Exp in Python:
Rel Exp in AWS Glue:
Current CTC:
Expected CTC:
Notice Period (Official):
Notice Period (Negotiable)/Reason:
Date of Birth:
PAN number:
Reason for Job Change:
Offer in Pipeline (Current Status):
Availability for virtual interview on weekdays between 10 AM - 4 PM (please mention time):
Current Res Location:
Preferred Job Location:
Whether educational % in 10th std, 12th std, UG is all above 50%?
Do you have any gaps in between your education or career? If so, please mention the duration in months/years:
Posted 2 weeks ago
5.0 - 8.0 years
10 - 16 Lacs
Pune, Chennai
Work from Office
Hello Connections, Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.

Job Profile: Data Engineer
Experience: Minimum 5 to maximum 8 years
Location: Chennai / Pune
Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive
Qualification: B.Tech / B.E / MCA / Computer Science background - any specialization

How to Apply? Send your CV to: sipriyar@sightspectrum.in
Contact Number: 6383476138

Don't miss out on this amazing opportunity to accelerate your professional career!

#bigdata #dataengineer #hadoop #spark #python #hive #pyspark
Posted 2 weeks ago
5.0 - 10.0 years
5 - 15 Lacs
Chennai
Hybrid
Role & responsibilities: Big Data, Hadoop, Hive, SQL, Cloudera, Impala, Python, PySpark

Fundamentals of:
- Big Data
- Cloudera Platform
- Unix
- Python

Expertise in:
- SQL/Hive
- PySpark

Nice to have:
- Django/Flask frameworks
Posted 3 weeks ago
15.0 - 24.0 years
30 - 40 Lacs
Bengaluru
Work from Office
Position Title: Pro Vice-Chancellor (Computer Engineering Background)
Location: Bengaluru North, Karnataka, India

Role Overview: The Pro Vice-Chancellor (PVC) will play a pivotal role in shaping and leading the academic and research vision of the technology and engineering schools, with a core emphasis on Computer Science and allied disciplines. The position requires an accomplished academician with strong subject knowledge, technological foresight, and the ability to lead cutting-edge research, foster innovation, and build robust academia-industry linkages.

Key Responsibilities:
- Academic and Technical Leadership
- Research and Innovation Leadership
- Technology Incubation and Start-Up Ecosystem
- Academic-Industry Collaboration
- Digital Transformation and Smart Campus Initiatives
- Internationalization
- Faculty and Talent Development
- Strategic Planning and Policy Implementation

Eligibility Criteria:

Mandatory Qualifications: Engineering Graduation (B.E./B.Tech), Post-Graduation (M.E./M.Tech), and Doctorate (Ph.D.) in any one of the following disciplines only:
- Computer Science
- Information Science
- Information Technology
- Data Science
- Artificial Intelligence & Machine Learning

Note: Candidates with engineering graduation in any other specialization will not be considered. Candidates with qualifications such as B.Sc., BCA, MCA, or other non-engineering degrees will also not be eligible.

Experience Requirements:
- Minimum 15 years of academic experience, including teaching, research, and academic administration.
- Demonstrated leadership in funded research projects, Ph.D. guidance, patents, and high-impact publications.
- Experience in establishing or leading research labs, innovation centers, or CoEs.

Preferred Attributes:
- Academic qualifications from premier national/international institutions (e.g., IITs, NITs, IIITs, global universities).
- Strong industry interface with a track record in consulting, technology advisory, or product development.
- Global exposure through research collaborations, academic visits, or international program management.
Posted 3 weeks ago
10.0 - 16.0 years
35 - 60 Lacs
Bengaluru
Hybrid
- At least 5 years of experience in a complex business environment or international organisation matrix.
- Must have experience and knowledge in data governance.
- Strong IT background, including expertise in big data, cloud technology, monitoring solutions, machine learning (ML), and artificial intelligence (AI).
- Familiarity with data governance tools such as Collibra (preferred) or similar alternatives.
- Proven track record in product management, data management, and information technology systems and tools.
- Experience with the SAFe Agile framework.
- Knowledge of data analytics/dashboard tools like Qlik and Microsoft Power BI is a plus.
- Nice to have: experience in the travel domain.
Posted 3 weeks ago
5 - 8 years
20 - 25 Lacs
Hyderabad
Hybrid
Role & responsibilities
- Design, build, and measure complex ELT jobs to process disparate data sources and form a high-integrity, high-quality, clean data asset.
- Execute and provide feedback on data modeling policies, procedures, processes, and standards.
- Assist with capturing and documenting system flow and other pertinent technical information about data, database design, and systems.
- Develop data quality standards and tools for ensuring accuracy.
- Work across departments to understand new data patterns.
- Translate high-level business requirements into technical specs.

Qualifications:
- Bachelor's degree in computer science or engineering.
- Years of experience with data analytics, data modeling, and database design.
- Years of experience with Vertica.
- Years of coding and scripting (Python, Java, Scala) and design experience.
- Years of experience with Airflow.
- Experience with ELT methodologies and tools.
- Experience with GitHub.
- Expertise in tuning and troubleshooting SQL.
- Strong data integrity, analytical, and multitasking skills.
- Excellent communication, problem-solving, organizational, and analytical skills.
- Able to work independently.

Additional/preferred skills:
- Familiar with the agile project delivery process.
- Knowledge of SQL and its use in data access and analysis.
- Ability to manage diverse projects impacting multiple roles and processes.
- Able to troubleshoot problem areas and identify data gaps and issues.
- Ability to adapt to a fast-changing environment.
- Experience designing and implementing automated ETL processes.
- Experience with the MicroStrategy reporting tool.

Preferred candidate profile
Posted 1 month ago
3 - 7 years
3 - 7 Lacs
Bengaluru
Hybrid
Hello everyone, we are hiring for Specialist Software Engineer - Big Data: Snowflake (Snowpark), Scala, Python, Linux; 3 to 7 years. If anyone is interested, please share your CV to tjagadishwarachari@primusglobal.com

Thanks & Regards,
Thanushree J
Associate - TA
PRIMUS Global Technologies Pvt. Ltd.
Posted 1 month ago
4 - 7 years
20 - 22 Lacs
Pune, Gurugram
Work from Office
Core skills and Competencies
1. Design, develop, and maintain data pipelines, ETL/ELT processes, and data integrations to support efficient and reliable data ingestion, transformation, and loading.
2. Collaborate with API developers and other stakeholders to understand data requirements and ensure the availability, reliability, and accuracy of the data.
3. Optimize and tune performance of data processes and workflows to ensure efficient data processing and analysis at scale.
4. Implement data governance practices, including data quality monitoring, data lineage tracking, and metadata management.
5. Work closely with infrastructure and DevOps teams to ensure the scalability, security, and availability of the data platform and data storage systems.
6. Continuously evaluate and recommend new technologies, tools, and frameworks to improve the efficiency and effectiveness of data engineering processes.
7. Collaborate with software engineers to integrate data engineering solutions with other systems and applications.
8. Document and maintain data engineering processes, including data pipeline configurations, job schedules, and monitoring and alerting mechanisms.
9. Stay up to date with industry trends and advancements in data engineering, cloud technologies, and data processing frameworks.
10. Provide mentorship and guidance to junior data engineers, promoting best practices in data engineering and ensuring the growth and development of the team.
11. Able to implement and troubleshoot REST services in Python (see the sketch after this list).
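Point 11 calls for REST services in Python; here is a minimal Flask sketch of such a service. The endpoint, route, and in-memory store are hypothetical illustrations, not the employer's API.

```python
# Minimal Flask REST sketch: a lookup endpoint for pipeline metadata.
# The route and the PIPELINES store are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real metadata store (database, catalog, etc.).
PIPELINES = {"orders_daily": {"status": "healthy", "last_run": "2024-01-01"}}


@app.route("/pipelines/<name>", methods=["GET"])
def get_pipeline(name: str):
    record = PIPELINES.get(name)
    if record is None:
        return jsonify({"error": f"unknown pipeline: {name}"}), 404
    return jsonify(record)


if __name__ == "__main__":
    app.run(port=8080)
```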
Posted 1 month ago
4 - 6 years
16 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing!!
Role: PySpark Developer
Experience Required: 4 to 6 yrs
Work Location: Hyderabad/Bangalore/Pune/Chennai/Kochi
Required Skills: PySpark/Python/Spark SQL/ETL
Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 month ago
4 - 6 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity!!

Job Description:
Experience: 4-6 yrs
Location: Chennai/Hyderabad/Bangalore/Pune/Bhubaneshwar/Kochi
Skill: PySpark
- Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, files.
- Experience in building ETL/data warehouse transformation processes.
- Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and SparkSQL and related frameworks/libraries.
- Developing scalable, reusable, self-service frameworks for data ingestion and processing.
- Integrating end-to-end data pipelines to take data from source to target data repositories, ensuring the quality and consistency of data.
- Processing performance analysis and optimization.
- Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, documentation.
- Experience working with structured and unstructured data.
Good to have (knowledge): 1. Experience in cloud-based solutions. 2. Knowledge of data management principles.

Interested candidates can share your resume to sangeetha.spstaffing@gmail.com with the below details inline:
Full Name as per PAN:
Mobile No:
Alt No/WhatsApp No:
Total Exp:
Relevant Exp in PySpark:
Rel Exp in Python:
Rel Exp in ETL/Bigdata:
Current CTC:
Expected CTC:
Notice Period (Official):
Notice Period (Negotiable)/Reason:
Date of Birth:
PAN number:
Reason for Job Change:
Offer in Pipeline (Current Status):
Availability for virtual interview on weekdays between 10 AM - 4 PM (please mention time):
Current Res Location:
Preferred Job Location:
Whether educational % in 10th std, 12th std, UG is all above 50%?
Do you have any gaps in between your education or career? If so, please mention the duration in months/years:
Posted 1 month ago
5 - 8 years
15 - 25 Lacs
Bengaluru
Hybrid
Warm Greetings from SP Staffing!!
Role: PySpark Developer
Experience Required: 5 to 8 yrs
Work Location: Bangalore
Required Skills: PySpark/SQL
Interested candidates can send resumes to nandhini.s@spstaffing.in
Posted 1 month ago
4 - 9 years
5 - 10 Lacs
Pune
Hybrid
Job Description
Are you curious, motivated, and forward-thinking? At FIS you'll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and, above all, fun.

About the team: In today's highly competitive market, firms must not only deliver superior returns, but also respond to more stringent reporting requirements and increasing demands for information, both from within and outside their organization. Throughout the industry there is mounting pressure on organizations to do more, requiring a clear technology strategy that not only addresses the demands of today but also enables the growth and performance of tomorrow. We are the Managed Services team for Insurance, with a good customer base, providing the best service and support to our satisfied clients.

What you bring:
- Real-world experience working on Big Data.
- Basic knowledge of SQL.
- Working knowledge of Spark.
- Knowledge of Scala and Java.
- Knowledge of AWS or any other cloud platform.

Additional Requirements:
- Ability to work as part of a team as well as being individually motivated.
- Ability to set correct expectations and deliver on them.
- Well-developed written and verbal communication and effective interpersonal skills.
- Proven self-starter with the initiative to seek information and solve problems.
- Successful within a multi-tasking environment.
- Excellent communication skills.
- Ability to work with external clients.

Qualifications: Bachelor's degree in computer science or a related field of study.

Competencies:
- Fluent in English.
- Collaborative: collaborate with different groups and complete the assigned task.
- Attention to detail: track record of authoring high-quality documentation.
- Organized approach: manage and adapt priorities according to client and internal requirements.
- Self-starter with a team mindset: work autonomously and as part of a global team.

What we offer you:
- A multifaceted job with a high degree of responsibility and a broad spectrum of opportunities.
- A broad range of professional education and personal development possibilities; FIS is your final career step!
- A competitive salary and benefits.
- A variety of career development tools, resources, and opportunities.

With a 50-year history rooted in the financial services industry, FIS™ is the world's largest global provider dedicated to financial technology solutions. We champion clients from banking to capital markets, retail to corporate and everything touched by financial services. Headquartered in Jacksonville, Florida, our 53,000 worldwide employees help serve more than 20,000 clients in over 130 countries. Our technology powers billions of transactions annually that move over $9 trillion around the globe. FIS is a Fortune 500 company and is a member of the Standard & Poor's 500® Index. FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the FIS Online Privacy Notice.
Posted 2 months ago
8 - 12 years
15 - 30 Lacs
Nagpur
Work from Office
RESPONSIBILITIES
- Supervises and supports data engineering projects and builds solutions by leveraging strong foundational knowledge in software/application development; he/she is hands-on.
- Develops and delivers data engineering documentation.
- Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
- Recommends analytic reporting products/tools and supports the adoption of emerging technology.
- Performs data engineering maintenance and support.
- Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.
- Performs ETL with the ability to pull data from various sources and load the transformed data into a database or business intelligence platform.
- Codes using programming languages used for statistical analysis and modeling, such as Python/Spark.

REQUIRED QUALIFICATIONS
- Literate in the programming languages used for statistical modeling and analysis, data warehousing and cloud solutions, and building data pipelines.
- Proficient in developing notebooks in Databricks using Python, Spark, and Spark SQL.
- Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure is preferred.
- Proficient in using Azure Data Factory and other Azure features such as Logic Apps.
- Preferred: knowledge of Delta Lake, Lakehouse, and Unity Catalog concepts.
- Strong understanding of cloud-based data lake systems and data warehousing solutions.
- Has used Agile concepts for development, including Kanban and Scrums.
- Strong understanding of the data interconnections between an organization's operational and business functions.
- Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing the data securely, and providing data accessibility.
- Strong understanding of the data environment so that it can scale for the following demands: throughput of data, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, data regulations, and compliance.
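A hedged sketch of the Databricks-notebook pattern this posting calls for: mixing the DataFrame API with Spark SQL in one job. Table names, mount paths, and columns are illustrative assumptions; in an actual Databricks notebook the `spark` session is provided for you.

```python
# Notebook-style PySpark sketch: register a view, aggregate with Spark SQL,
# finish with the DataFrame API. Paths and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("notebook-style-example").getOrCreate()

events = spark.read.parquet("/mnt/raw/events")  # hypothetical mount path
events.createOrReplaceTempView("events")

# Spark SQL for the aggregate...
daily = spark.sql(
    """
    SELECT to_date(event_ts) AS event_date,
           event_type,
           COUNT(*)          AS n
    FROM events
    GROUP BY to_date(event_ts), event_type
    """
)

# ...DataFrame API for the final shaping and persistence.
(
    daily.orderBy(F.desc("n"))
    .write.mode("overwrite")
    .saveAsTable("analytics.daily_event_counts")  # hypothetical table
)
```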
Posted 2 months ago
8 - 12 years
15 - 30 Lacs
Ghaziabad
Work from Office
RESPONSIBILITIES
- Supervises and supports data engineering projects and builds solutions by leveraging strong foundational knowledge in software/application development; he/she is hands-on.
- Develops and delivers data engineering documentation.
- Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
- Recommends analytic reporting products/tools and supports the adoption of emerging technology.
- Performs data engineering maintenance and support.
- Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.
- Performs ETL with the ability to pull data from various sources and load the transformed data into a database or business intelligence platform.
- Codes using programming languages used for statistical analysis and modeling, such as Python/Spark.

REQUIRED QUALIFICATIONS
- Literate in the programming languages used for statistical modeling and analysis, data warehousing and cloud solutions, and building data pipelines.
- Proficient in developing notebooks in Databricks using Python, Spark, and Spark SQL.
- Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure is preferred.
- Proficient in using Azure Data Factory and other Azure features such as Logic Apps.
- Preferred: knowledge of Delta Lake, Lakehouse, and Unity Catalog concepts.
- Strong understanding of cloud-based data lake systems and data warehousing solutions.
- Has used Agile concepts for development, including Kanban and Scrums.
- Strong understanding of the data interconnections between an organization's operational and business functions.
- Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing the data securely, and providing data accessibility.
- Strong understanding of the data environment so that it can scale for the following demands: throughput of data, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, data regulations, and compliance.
Posted 2 months ago
6 - 11 years
8 - 12 Lacs
Bengaluru
Work from Office
As a Global Commercial AI Excellence Lead - Data Science at Novo Nordisk, you will have the opportunity to:
- Drive innovation and contribute to the development of AI and advanced analytics capabilities within Portfolio Access Strategy and Pricing (PSAP).
- Collaborate with cross-functional teams to identify opportunities for leveraging AI and analytics to optimize forecasting, payer evidence, and pricing strategies.
- Design and deploy AI-powered models focused on patient segmentation, risk prediction modelling, price sensitivity and revenue optimization algorithms, and new product and in-line product forecasting.
- Build user-centric tools or apps using Business Intelligence (BI) technologies to communicate the insights and drive usage across commercial teams.
- Collaborate with the Business Intelligence and Data Engineering teams to provide inputs that drive the development and maintenance of a robust data foundation within PSAP.

Qualifications:
- University master's degree in Biostatistics, Mathematics, Economics, Engineering, Computer Science, Information Technology, Life Sciences, or equivalent.
- Master's degree/bachelor's with a minimum of 6 years of experience in data science, preferably within FMCG, banking/insurance, pharma, or consultancy.
- Documented experience with data science and machine learning applications.
- Experience in programming languages such as Python, SQL, R, etc.; PySpark, Big Data (Hadoop, etc.), advanced analytical tools such as Power BI or Tableau, and MLOps (Azure or elsewhere) will be preferred.
- Specialized in forecasting techniques such as time series (ARIMA, ARIMAX, Prophet, XGBoost and other GBM techniques), Monte Carlo simulation, and segmentation and prediction techniques such as regression, support vector machines, clustering, decision trees, and random forests; ANN, CNN, and deep neural networks preferred.
- Experience leveraging (Gen)AI and other recent large language models.
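Since the posting names ARIMA-family forecasting, a minimal statsmodels sketch follows. The synthetic series and the (1, 1, 1) order are illustrative assumptions, not a tuned production model.

```python
# Minimal ARIMA forecasting sketch with statsmodels on synthetic monthly
# data. Series and model order are illustrative, not tuned.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly demand series with trend and noise.
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-31", periods=48, freq="M")
y = pd.Series(100 + np.arange(48) * 2 + rng.normal(0, 5, 48), index=idx)

# Fit an ARIMA(1, 1, 1): one AR term, first differencing, one MA term.
fitted = ARIMA(y, order=(1, 1, 1)).fit()

# Forecast the next 12 months with confidence intervals.
forecast = fitted.get_forecast(steps=12)
print(forecast.predicted_mean.head())
print(forecast.conf_int().head())
```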
Posted 2 months ago
5 - 10 years
35 - 60 Lacs
Bengaluru
Hybrid
Role Purpose
A result-oriented software developer to develop solutions powered by Fujitsu's new processor, which helps solve real-world challenges facing society and businesses across different industries. As a software developer, you are passionate about developing solutions and should be comfortable with back-end coding languages, technologies, frameworks, and third-party/open-source libraries. You will play a role in developing innovative advanced solutions and services to support business outcomes.

Responsibilities
- Improve and analyze performance of software applications.
- Enable and optimize OSS/ISV applications for Fujitsu's new processor, starting with AI and Big Data analytics-related applications.
- Develop new algorithms for Big Data analytics frameworks and tuning technologies, and work on software based on the proposed approaches using Big Data engineering.
- Design and develop data analysis workflows, applications, effective APIs, and system architectures.
- Deploy and test applications to ensure functionality, performance, responsiveness, and efficiency.
- Create test cases, test plans, and automated test scripts for unit tests.
- Troubleshoot, debug, and fix bugs, and upgrade software/applications.
- Create security and data protection settings and measures.
- Write technical documentation.
- Work and communicate well with product managers, business analysts, data scientists, and other software developers to collaborate, review, and deliver high-quality applications.
- Learn continually, share knowledge, and foster exchange of skills.
- Work using agile methods (planning meetings, review meetings, standup meetings, development, etc.).
- Work on multiple projects at once while keeping focused on the project timeline.

Experience
You will be able to demonstrate that you have:
- A degree in computer science or a relevant field.
- Proven experience as a software developer or in a similar role, and familiarity with common stacks for software applications using data analytics technologies.
- Minimum 4 years of experience in production application/solution development.
- Experience in Big Data using Databricks, PySpark, etc., and statistical analysis with data analytics tools and technologies.
- Experience with data intelligence platforms and large-scale distributed processing with Spark.
- Understanding of Java, JVM, Vector API, HotSpot, JNI, vectorization techniques, and SIMD optimization.
- Knowledge of Spark accelerator frameworks like Comet, Velox, TrinoDB, ClickHouse, etc.
- Knowledge of ETL (AWS Glue, etc.) and databases (RDB, vector DB, etc.).
- Knowledge of the DWH ecosystem (Palantir, Snowflake, Databricks, BigQuery, Redshift, etc.).
- Experience with SQL/PostgreSQL/Python data processing, with software development skills in Java/C++/Scala (especially Java).
- Experience in data analysis using parallel processing (Apache Spark, PySpark, etc.).
- In-depth understanding and knowledge of the specified data analytic tools and technologies, with framework engineering experience.
- Proficiency with fundamental back-end server-side languages such as Python, C/C++, etc.
- Knowledge of OSS as an alternative to cloud computing for each layer.
- Experience in software development with an agile approach.
- Excellent writing, verbal communication, and teamwork skills.

Preferred Experience
You will be able to demonstrate that you have:
- Experience in developing and deploying APIs.
- Experience in performance tuning and optimization techniques for data analytics tools, SQL, and NoSQL databases.
- Knowledge of and experience with cloud service (Azure/AWS) features such as Functions, VMs, containers, and DevOps (CI/CD).
- Great skills in evaluating performance and security of software applications and delivering solutions that are efficient and performant.
- HPC and computer architecture understanding.
- Skills in AI-related technology.
Posted 2 months ago