
3311 Big Data Jobs - Page 29

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

4 - 8 Lacs

Pune

Work from Office

Experience with ETL processes and data warehousing; proficient in SQL

Posted 3 weeks ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad, Pune, Telangana

Work from Office

We have immediate openings in Big Data for a contract-to-hire role with multiple clients.

Job Details
Skills: Big Data
Job type: Contract to hire

Primary Skills
- 6-8 years of experience working as a big data developer / supporting big data environments
- Strong knowledge of Unix/big data scripting
- Strong understanding of the big data (CDP/Hive) environment
- Hands-on experience with GitHub and CI/CD implementations
- Drive to learn and understand the reason behind every task
- Ability to work independently on specialized assignments within the context of project deliverables
- Ownership of solutions and tools that iteratively increase engineering efficiency
- Excellent communication skills; team player
- Good to have: Hadoop and Control-M tooling knowledge
- Good to have: automation experience and knowledge of any monitoring tools

Role
You will work with the team handling an application built on Hadoop/CDP and Hive, within the Data Engineering team and alongside the Lead Hadoop Data Engineer and Product Owner. You are expected to:
- Support the existing application as well as design and build new data pipelines
- Support evergreening/upgrade activities for CDP/SAS/Hive
- Participate in service management of the application: support issue resolution, improve processing performance, and prevent issues from recurring
- Ensure the use of Hive, Unix scripting, and Control-M reduces lead time to delivery
- Support the application on UK shifts as well as overnight/weekend on-call cover (mandatory)

Working hours: UK shift - one week per month; on call - one week per month.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 - 0 Lacs

Pune

Hybrid

So, what's the role all about?
Within Actimize, the AI and Analytics Team is developing the next-generation advanced analytical cloud platform that will harness the power of data to provide maximum accuracy for our clients' financial crime programs. As part of the PaaS/SaaS development group, you will be responsible for developing this platform for Actimize cloud-based solutions and for working with cutting-edge cloud technologies.

How will you make an impact?
NICE Actimize is the largest and broadest provider of financial crime, risk, and compliance solutions for regional and global financial institutions, and has been consistently ranked number one in the space. At NICE Actimize, we recognize that every employee's contributions are integral to our company's growth and success. To find and acquire the best and brightest talent around the globe, we offer a challenging work environment, competitive compensation and benefits, and rewarding career opportunities. Come share, grow and learn with us - you'll be challenged, you'll have fun, and you'll be part of a fast-growing, highly respected organization. This new SaaS platform will enable our customers (some of the biggest financial institutions around the world) to create solutions on the platform to fight financial crime.

Have you got what it takes?
- Design, implement, and maintain real-time and batch data pipelines for fraud detection systems.
- Automate data ingestion from transactional systems, third-party fraud intelligence feeds, and behavioral analytics platforms.
- Ensure high data quality, lineage, and traceability to support audit and compliance requirements.
- Collaborate with fraud analysts and data scientists to deploy and monitor machine learning models in production.
- Monitor pipeline performance and implement alerting for anomalies or failures.
- Ensure data security and compliance with financial regulations.

Qualifications:
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field.
- 4-6 years of experience in a DataOps role, preferably in fraud or risk domains.
- Strong programming skills in Python and SQL.
- Knowledge of financial fraud patterns, transaction monitoring, and behavioral analytics.
- Familiarity with fraud detection systems, rules engines, or anomaly detection frameworks.
- Experience with AWS cloud platforms.
- Understanding of data governance, encryption, and secure data handling practices.
- Experience with fraud analytics tools or platforms like Actimize.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams - comprised of the best of the best - work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7822
Reporting into: Director
Role Type: Tech Manager
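For context on the pipeline-monitoring and anomaly-alerting duties above, here is a minimal, illustrative Python sketch of a batch fraud-screening check. The DataFrame layout (account_id, amount) and the z-score rule are assumptions, not the Actimize implementation.

```python
# Illustrative only: a toy batch screening step, not Actimize's method.
# Assumes a pandas DataFrame of transactions with hypothetical columns
# "account_id" and "amount".
import pandas as pd

def flag_anomalous_amounts(txns: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag transactions whose amount deviates sharply from the account's history."""
    stats = txns.groupby("account_id")["amount"].agg(["mean", "std"])
    joined = txns.join(stats, on="account_id")
    # Guard against zero std for accounts with too little history.
    joined["z"] = (joined["amount"] - joined["mean"]) / joined["std"].replace(0, float("nan"))
    return joined[joined["z"].abs() > z_threshold]

if __name__ == "__main__":
    txns = pd.DataFrame({
        "account_id": ["a1", "a1", "a1", "a2"],
        "amount": [100.0, 110.0, 5000.0, 75.0],
    })
    # Low threshold only because the toy sample is tiny.
    print(flag_anomalous_amounts(txns, z_threshold=1.0))
```

A production version would run inside the batch pipeline and feed the alerting layer the role describes, rather than printing matches.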

Posted 3 weeks ago

Apply

5.0 - 8.0 years

11 - 21 Lacs

Pune

Work from Office

This role is accountable for developing, expanding, and optimizing Data Management Architecture, Design & Implementation under Singtel Data Platform & Management:
- Design, develop, and implement data governance and management solutions covering data quality, privacy, protection, and associated control technology, as per industry best practice.
- Review, evaluate, and implement Data Management standards, primarily Data Classification and Data Retention, across systems.
- Design, develop, and implement automated data discovery rules to identify the presence of PII attributes.
- Drive development, optimization, testing, and tooling to improve overall data control management (security, data privacy, protection, data quality).
- Review, analyze, benchmark, and approve solution designs from product companies, internal teams, and vendors.
- Ensure that proposed solutions are aligned and conform to the data landscape, big data architecture guidelines, and roadmap.

SECTION B: KEY RESPONSIBILITIES AND RESULTS
1. Design and implement data management standards such as Catalog Management, Data Quality, Data Classification, and Data Retention
2. Drive BAU processes, testing, and tooling to improve data security, privacy, and protection
3. Identify, design, and implement internal process improvements: automating manual processes, and controlling and optimizing data technology service delivery
4. Implement and support the Data Management Technology solution throughout its lifecycle: user onboarding, upgrades, fixes, access management, etc.

SECTION C: QUALIFICATIONS / EXPERIENCE / KNOWLEDGE REQUIRED
Education and Qualifications: Diploma in Data Analytics, Data Engineering, IT, Computer Science, Software Engineering, or equivalent.
Work Experience: Exposure to Data Management and Big Data concepts; knowledge of and experience with Data Management, Data Integration, and Data Quality products.
Technical Skills: Informatica CDGC, Collibra, Alation, Informatica Data Quality, Data Privacy Management, Azure Databricks.
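To make the "automated data discovery rules to identify PII" duty concrete, here is a minimal sketch using regex heuristics. The rule set, threshold, and column sampling are illustrative assumptions; a real deployment would lean on catalog/classification tooling such as Informatica CDGC or Collibra rather than hand-rolled patterns.

```python
# A minimal, illustrative PII-discovery rule: tag a column when most of its
# sampled values match a known pattern. Rules and threshold are assumptions.
import re

PII_RULES = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "phone_in": re.compile(r"^(\+91[\s-]?)?[6-9]\d{9}$"),  # Indian mobile format
}

def discover_pii(column_name: str, sample_values: list[str], hit_ratio: float = 0.8) -> list[str]:
    """Return PII tags whose pattern matches at least hit_ratio of the sample."""
    tags = []
    for tag, pattern in PII_RULES.items():
        hits = sum(1 for v in sample_values if pattern.match(v.strip()))
        if sample_values and hits / len(sample_values) >= hit_ratio:
            tags.append(tag)
    return tags

print(discover_pii("contact", ["alice@example.com", "bob@example.org"]))  # ['email']
```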

Posted 3 weeks ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Noida, Pune, Bengaluru

Work from Office

We are hiring a Big Data Lead for one of our clients, based in Noida / Indore / Bangalore / Hyderabad / Pune, for a full-time position, and are willing to hire immediately. Please share resumes with rana@enormousenterprise.in / anjana@enormousenterprise.in.

Interview mode: video interview, followed by a second video interview or an F2F interview.
Experience level: 7 to 10 years, with a minimum of 2-5 years of lead experience.

Position Summary: We are looking for candidates with hands-on experience in Big Data or cloud technologies.

Technical skills:
- 7 to 10 years of experience - Must Have
- Data ingestion, processing, and orchestration knowledge - Must Have
- Expertise and hands-on experience with Spark DataFrames and Hadoop ecosystem components - Must Have
- Good hands-on experience with any cloud (AWS/Azure/GCP) - Must Have
- Good knowledge of PySpark (SparkSQL) - Must Have
- Good knowledge of shell script and Python - Good to Have
- Good knowledge of SQL - Good to Have
- Good knowledge of migration projects on Hadoop - Good to Have
- Good knowledge of a workflow engine such as Oozie or Autosys - Good to Have
- Good knowledge of Agile development - Good to Have
- Passion for exploring new technologies - Good to Have
- Automation-first approach - Good to Have
- Good communication skills - Must Have

Roles & Responsibilities:
- Lead the technical implementation of data warehouse modernization projects for Impetus
- Design and develop applications on cloud technologies
- Lead technical discussions with internal and external stakeholders
- Resolve technical issues for the team
- Ensure the team completes all tasks and activities as planned
- Code development
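As a flavour of the must-have PySpark (SparkSQL) skill, here is a minimal ingestion-and-aggregation sketch; the paths, schema, and column names are illustrative assumptions, not taken from the posting.

```python
# A minimal PySpark/SparkSQL sketch: ingest a raw feed, aggregate via SQL,
# and write partitioned Parquet for downstream Hive/warehouse use.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest raw CSV landed by an upstream feed (hypothetical path/schema).
orders = spark.read.option("header", True).csv("/data/landing/orders/")
orders.createOrReplaceTempView("orders")

# SparkSQL aggregation, written as SQL so analysts can review the logic.
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_cnt, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Partitioned write for downstream consumption.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_orders/")
```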

Posted 3 weeks ago

Apply

7.0 - 10.0 years

27 - 32 Lacs

Pune

Hybrid

Job Title: Big Data Developer
Job Location: Pune
Experience: 7+ years
Job Type: Hybrid

Required skills:
- Strong skills in messaging technologies like Apache Kafka or equivalent
- Programming skills in Scala and Spark (with optimization techniques) and Python
- Able to write queries through Jupyter Notebook
- Orchestration tools such as NiFi or Airflow
- Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics
- Experience with SQL and distributed systems
- Strong understanding of cloud architecture
- Ensure a high-quality code base by writing and reviewing performant, well-tested code
- Demonstrated experience building complex products
- Knowledge of Splunk or other alerting and monitoring solutions
- Fluent in the use of Git and Jenkins
- A broad understanding of software engineering concepts and methodologies is required
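Since the posting asks for orchestration tools such as NiFi or Airflow, here is a minimal Airflow 2.x sketch that schedules a daily spark-submit batch. The DAG id, schedule, and job path are assumptions, not part of the posting.

```python
# A minimal Airflow 2.x DAG: one daily task that launches a Spark batch job
# via spark-submit. All identifiers and paths are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_kafka_enrichment",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark = BashOperator(
        task_id="run_spark_batch",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/enrich_transactions.py --date {{ ds }}"
        ),
    )
```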

Posted 3 weeks ago

Apply

7.0 - 10.0 years

18 - 22 Lacs

Bengaluru

Work from Office

Roles and Responsibilities:
- Development and implementation of DBT models, ensuring efficient data transformation workflows.
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into robust DBT solutions.
- Optimize DBT pipelines for performance, scalability, and maintainability.
- Enforce best practices in version control, testing, and documentation within the DBT environment.
- Monitor and troubleshoot DBT workflows to ensure reliability and timely delivery of data products.
- Provide guidance and mentorship to the team on DBT practices and advanced modeling techniques.
- Stay updated on the latest DBT features and incorporate them into the data transformation ecosystem.

Critical Skills to Possess: Snowflake and DBT; 7+ years of experience.

Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience.
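For illustration, DBT runs like those described above can be invoked programmatically from Python. This minimal sketch assumes dbt-core 1.5+ (which exposes dbtRunner) and a hypothetical model name.

```python
# A minimal sketch of invoking dbt from Python (assumes dbt-core 1.5+).
# The model selector "stg_orders+" is a hypothetical example.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# Build and test one model plus its downstream dependents.
res: dbtRunnerResult = runner.invoke(["build", "--select", "stg_orders+"])

if not res.success:
    raise SystemExit("dbt build failed - check target/run_results.json")
```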

Posted 3 weeks ago

Apply

10.0 - 12.0 years

27 - 32 Lacs

Gurugram

Work from Office

Key Responsibilities: As an Enterprise Data Architect, you will:
- Lead Data Architecture: Design, develop, and implement comprehensive enterprise data architectures, primarily leveraging Azure and Snowflake platforms.
- Data Transformation & ETL: Oversee and guide complex data transformation and ETL processes for large and diverse datasets, ensuring data integrity, quality, and performance.
- Customer-Centric Data Design: Specialize in designing and optimizing customer-centric datasets from various sources, including CRM, Call Center, Marketing, Offline, and Point of Sale systems.
- Data Modeling: Drive the creation and maintenance of advanced data models, including Relational, Dimensional, Columnar, and Big Data models, to support analytical and operational needs.
- Query Optimization: Develop, optimize, and troubleshoot complex SQL and NoSQL queries to ensure efficient data retrieval and manipulation.
- Data Warehouse Management: Apply advanced data warehousing concepts to build and manage high-performing, scalable data warehouse solutions.
- Tool Evaluation & Implementation: Evaluate, recommend, and implement industry-leading ETL tools such as Informatica and Unifi, ensuring best practices are followed.
- Business Requirements & Analysis: Lead efforts in business requirements definition and management, structured analysis, process design, and use case documentation to translate business needs into technical specifications.
- Reporting & Analytics Support: Collaborate with reporting teams, providing architectural guidance and support for reporting technologies like Tableau and PowerBI.
- Software Development Practices: Apply professional software development principles and best practices to data solution delivery.
- Stakeholder Collaboration: Interface effectively with sales teams and directly engage with customers to understand their data challenges and lead them to successful outcomes.
- Project Management & Multi-tasking: Demonstrate exceptional organizational skills, with the ability to manage and prioritize multiple simultaneous customer projects effectively.
- Strategic Thinking & Leadership: Act as a self-managed, proactive, and customer-focused leader, driving innovation and continuous improvement in data architecture.

Position Requirements:
- 10+ years of strong experience with data transformation and ETL on large datasets.
- Experience designing customer-centric datasets (e.g., CRM, Call Center, Marketing, Offline, Point of Sale).
- 5+ years of data modeling experience (e.g., Relational, Dimensional, Columnar, Big Data).
- 5+ years of complex SQL or NoSQL experience.
- Extensive experience in advanced data warehouse concepts.
- Proven experience with industry ETL tools (e.g., Informatica, Unifi).
- Solid experience with business requirements definition and management, structured analysis, process design, and use case documentation.
- Experience with reporting technologies (e.g., Tableau, PowerBI).
- Demonstrated experience in professional software development.
- Exceptional organizational skills and the ability to manage multiple simultaneous customer projects.
- Strong verbal and written communication skills to interface with sales teams and lead customers to successful outcomes.
- Must be self-managed, proactive, and customer-focused.

Technical Skills:
- Cloud Platforms: Microsoft Azure
- Data Warehousing: Snowflake
- ETL Methodologies: Extensive experience in ETL processes and tools
- Data Transformation: Large-scale data transformation
- Data Modeling: Relational, Dimensional, Columnar, Big Data
- Query Languages: Complex SQL, NoSQL
- ETL Tools: Informatica, Unifi (or similar enterprise-grade tools)
- Reporting & BI: Tableau, PowerBI

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Noida

Work from Office

Role: Senior Databricks Engineer

As a Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
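As a concrete but illustrative example of such a pipeline, here is a minimal PySpark sketch of an incremental upsert using Delta Lake's MERGE on Databricks; the table names and merge key are assumptions.

```python
# A minimal Databricks-style incremental pipeline using Delta Lake MERGE.
# Table names, paths, and the merge key are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# New/changed records landed by an upstream feed (hypothetical path).
updates = spark.read.format("json").load("/mnt/raw/customer_updates/")

# Upsert into the curated Delta table.
target = DeltaTable.forName(spark, "analytics.customers")
(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```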

Posted 3 weeks ago

Apply

10.0 - 12.0 years

20 - 25 Lacs

Noida

Work from Office

Key Responsibilities: As an Enterprise Data Architect, you will:
- Lead Data Architecture: Design, develop, and implement comprehensive enterprise data architectures, primarily leveraging Azure and Snowflake platforms.
- Data Transformation & ETL: Oversee and guide complex data transformation and ETL processes for large and diverse datasets, ensuring data integrity, quality, and performance.
- Customer-Centric Data Design: Specialize in designing and optimizing customer-centric datasets from various sources, including CRM, Call Center, Marketing, Offline, and Point of Sale systems.
- Data Modeling: Drive the creation and maintenance of advanced data models, including Relational, Dimensional, Columnar, and Big Data models, to support analytical and operational needs.
- Query Optimization: Develop, optimize, and troubleshoot complex SQL and NoSQL queries to ensure efficient data retrieval and manipulation.
- Data Warehouse Management: Apply advanced data warehousing concepts to build and manage high-performing, scalable data warehouse solutions.
- Tool Evaluation & Implementation: Evaluate, recommend, and implement industry-leading ETL tools such as Informatica and Unifi, ensuring best practices are followed.
- Business Requirements & Analysis: Lead efforts in business requirements definition and management, structured analysis, process design, and use case documentation to translate business needs into technical specifications.
- Reporting & Analytics Support: Collaborate with reporting teams, providing architectural guidance and support for reporting technologies like Tableau and PowerBI.
- Software Development Practices: Apply professional software development principles and best practices to data solution delivery.
- Stakeholder Collaboration: Interface effectively with sales teams and directly engage with customers to understand their data challenges and lead them to successful outcomes.
- Project Management & Multi-tasking: Demonstrate exceptional organizational skills, with the ability to manage and prioritize multiple simultaneous customer projects effectively.
- Strategic Thinking & Leadership: Act as a self-managed, proactive, and customer-focused leader, driving innovation and continuous improvement in data architecture.

Position Requirements:
- 10+ years of strong experience with data transformation and ETL on large datasets.
- Experience designing customer-centric datasets (e.g., CRM, Call Center, Marketing, Offline, Point of Sale).
- 5+ years of data modeling experience (e.g., Relational, Dimensional, Columnar, Big Data).
- 5+ years of complex SQL or NoSQL experience.
- Extensive experience in advanced data warehouse concepts.
- Proven experience with industry ETL tools (e.g., Informatica, Unifi).
- Solid experience with business requirements definition and management, structured analysis, process design, and use case documentation.
- Experience with reporting technologies (e.g., Tableau, PowerBI).
- Demonstrated experience in professional software development.
- Exceptional organizational skills and the ability to manage multiple simultaneous customer projects.
- Strong verbal and written communication skills to interface with sales teams and lead customers to successful outcomes.
- Must be self-managed, proactive, and customer-focused.

Technical Skills:
- Cloud Platforms: Microsoft Azure
- Data Warehousing: Snowflake
- ETL Methodologies: Extensive experience in ETL processes and tools
- Data Transformation: Large-scale data transformation
- Data Modeling: Relational, Dimensional, Columnar, Big Data
- Query Languages: Complex SQL, NoSQL
- ETL Tools: Informatica, Unifi (or similar enterprise-grade tools)
- Reporting & BI: Tableau, PowerBI

Posted 3 weeks ago

Apply

8.0 - 13.0 years

8 - 13 Lacs

Telangana

Work from Office

Key Responsibilities:

Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.

Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up to date with Azure data technologies and recommend improvements to enhance data processing capabilities.

Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.

Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.

Collaboration & Communication:
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.

Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a team lead or manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).

Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.
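For a taste of the Azure tooling involved, here is a minimal sketch of triggering an Azure Data Factory pipeline run from Python using azure-identity and azure-mgmt-datafactory; every resource name below is a placeholder, not taken from the posting.

```python
# A minimal sketch of kicking off an ADF pipeline run programmatically.
# Subscription, resource group, factory, and pipeline names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()  # resolves from env/managed identity
adf = DataFactoryManagementClient(credential, subscription_id="<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-ingestion",
    pipeline_name="pl_daily_load",
    parameters={"load_date": "2024-01-01"},
)
print("Started ADF run:", run.run_id)
```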

Posted 3 weeks ago

Apply

12.0 - 20.0 years

35 - 45 Lacs

Gurugram

Work from Office

Role & responsibilities

Type of profile we are looking for:
- 10+ years of experience driving large data programs in banking.
- 8-10 years of experience implementing data governance frameworks.
- In-depth understanding of RDAR, BCBS 239, and financial and non-financial risks.
- Experience in data engineering and a good understanding of ETL and data platforms.
- Experience in risk, regulatory, and data programs.
- Experience creating data architectures in GCP.
- Working knowledge of Databricks.
- BFS domain experience is a must.
- Good communication skills.
- Must visit the office 3 days a week.

Key day-to-day responsibilities:
- Work with client technology partners.
- Be the link between the engineering team and business stakeholders.
- Take reporting and data aggregation requirements from the business and liaise with Tech to integrate the logical data models into Datahub.
- Assist the client tech teams in building the new data platform.
- Experience in building data models, quality controls, and data profiling.

Good to have:
- Understanding of ServiceNow.
- Worked on BCBS 239.
- Project management experience.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

9 - 13 Lacs

Noida

Work from Office

About the job:
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

8 - 12 Lacs

Pune

Hybrid

So, what's the role all about?
We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data, without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready.

How will you make an impact?
- Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types, eliminating the need to copy data into Knowledge Hub instances.
- Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources.
- Build secure, scalable connectors to read directly from customer-maintained indices and data repositories.
- Enable self-service capabilities for customers to manage content sources using AppFlow and Tray.ai, configure ingestion rules, and set up search parameters independently.
- Collaborate with the NLP/AI team to optimize relevance and performance for RAG search pipelines.
- Work closely with product and UX teams to design intuitive, powerful experiences around self-service data onboarding and search configuration.
- Implement data governance, access control, and observability features to ensure enterprise readiness.

Have you got what it takes?
- Proven experience with search infrastructure, RAG pipelines, and LLM-based applications.
- 5+ years' hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms.
- Strong backend development skills (Python, TypeScript/NodeJS, .NET/Java) and familiarity with building and consuming REST APIs.
- Infrastructure-as-code (IaC) knowledge, e.g., AWS CloudFormation or CDK.
- Deep understanding of data ingestion pipelines, index management, and search query optimization.
- Experience working with unstructured and semi-structured data in real-world enterprise settings.
- Ability to design for scale, security, and multi-tenant environments.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams - comprised of the best of the best - work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor
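Because the role centres on retrieval without data duplication, an illustrative (non-Knowledge Hub) sketch of the underlying RAG pattern may help: query the customer's own index at answer time instead of copying data. Here, search_customer_index and call_llm are hypothetical stand-ins, not product APIs.

```python
# Illustrative RAG pattern only; not the Knowledge Hub API.
# Both helpers below are hypothetical stand-ins to be wired to real backends.

def search_customer_index(query: str, top_k: int = 3) -> list[dict]:
    """Hypothetical connector: query the customer-maintained index in place."""
    raise NotImplementedError("wire this to the CMS/search backend")

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client."""
    raise NotImplementedError("wire this to the model endpoint")

def answer(query: str) -> str:
    # Retrieve passages at query time; no data is copied into a second store.
    passages = search_customer_index(query)
    context = "\n\n".join(p["text"] for p in passages)
    prompt = (
        "Answer using only the context below. Cite the source ids.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```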

Posted 3 weeks ago

Apply

7.0 - 12.0 years

7 - 13 Lacs

Pune, Chennai, Bengaluru

Work from Office

ETL Tester with 3.6+ years of big data experience
• Utilize Big Data technologies such as Hadoop, Spark, or Hive and related ecosystem components for testing.
• Write and execute complex queries in SQL, HiveQL, or other query languages to validate data processing and transformation.
• Maintain regression test suites specific to Big Data applications.
• Collaborate with cross-functional teams to understand project requirements and Big Data system specifications.
• Develop detailed test plans, test cases, and test scripts tailored to Big Data testing needs.
• Execute manual test cases to verify the correctness and completeness of data transformation processes.
• Validate data ingestion and extraction procedures, ensuring data accuracy and consistency.
• Identify and document defects, anomalies, and data quality issues.
• Perform regression testing to ensure that previously identified defects have been resolved and new changes have not introduced new issues.
• Collaborate closely with developers, data engineers, and data scientists to understand data processing logic and resolve issues.
• Effectively communicate test results, progress, and potential risks to project stakeholders.
• Proficient with Jira and testing methodologies.
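To illustrate the kind of source-to-target validation described above, here is a minimal pytest sketch; source_conn and target_conn are assumed to be fixtures yielding DB-API connections, and the table names are invented for the example.

```python
# A minimal pytest sketch of ETL reconciliation checks.
# source_conn / target_conn are assumed pytest fixtures that yield
# DB-API connections (e.g., to the staging source and the curated target).

def fetch_scalar(conn, sql: str):
    # Assumes the driver's cursor supports the context-manager protocol.
    with conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_row_counts_match(source_conn, target_conn):
    """Every staged order should land in the curated table."""
    src = fetch_scalar(source_conn, "SELECT COUNT(*) FROM staging.orders")
    tgt = fetch_scalar(target_conn, "SELECT COUNT(*) FROM curated.orders")
    assert src == tgt, f"row-count mismatch: source={src}, target={tgt}"

def test_no_null_business_keys(target_conn):
    """The transformation must never emit a NULL business key."""
    nulls = fetch_scalar(
        target_conn, "SELECT COUNT(*) FROM curated.orders WHERE order_id IS NULL"
    )
    assert nulls == 0
```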

Posted 3 weeks ago

Apply


4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

4+ years of experience in software development using Java/J2EE technologies. Exposure to microservices and RESTful API development with Java and the Spring Framework. 4+ years of experience in database technologies, with exposure to NoSQL technologies. 4 years of experience working on projects involving the implementation of solutions applying development life cycles (SDLC). Working experience with a frontend technology like ReactJS or other JavaScript frameworks.

Posted 3 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Data Engineer

Skills required: big data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP and BigQuery preferred), Airflow (good knowledge of Airflow features, operators, scheduling, etc.)
Skills that would be an advantage: DBT, Kafka
Experience level: 4-5 years

NOTE: Candidates will take a coding test (Python and SQL) during the interview process. This will be done through CoderPad; the panel will set it at run time.

Posted 3 weeks ago

Apply

8.0 - 15.0 years

30 - 35 Lacs

Bengaluru

Work from Office

JD:
- Should have good experience in DB testing and writing complex SQL.
- Should have worked on ETL applications or big data technologies.
- Hands-on experience building automation frameworks for database testing and data pipelines using Python.
- Should have worked on Python + Pytest/Robot automation frameworks.
- Should be proficient in web testing and have working automation experience in UI testing.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

10 - 16 Lacs

Hyderabad

Work from Office

About the client: hiring for one of the most prestigious multinational corporations.

Job Title: Big Data Engineer (Scala, AWS)
Experience: 6 to 10 years

Key Responsibilities:
- Design, develop, and optimize scalable big data pipelines using Apache Spark and Scala.
- Build batch and real-time data processing workflows to ingest, transform, and aggregate large datasets.
- Write high-performance SQL queries to support data analysis and reporting.
- Collaborate with data architects, data scientists, and business stakeholders to understand requirements and deliver high-quality data solutions.
- Ensure data quality, integrity, and governance across systems.
- Participate in code reviews and maintain best practices in data engineering.
- Troubleshoot and optimize the performance of Spark jobs and SQL queries.
- Monitor and maintain production data pipelines and perform root cause analysis of data issues.

Technical Skills:
- 6 to 10 years of overall experience in software/data engineering.
- 4+ years of hands-on experience with Apache Spark using Scala.
- Strong proficiency in Scala and functional programming concepts.
- Extensive experience with SQL (preferably in distributed databases like Hive, Presto, Snowflake, or BigQuery).
- Experience working in the Hadoop ecosystem (HDFS, Hive, HBase, Oozie, etc.).
- Knowledge of data modeling, data architecture, and ETL frameworks.
- Familiarity with version control (Git), CI/CD pipelines, and DevOps practices.
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.

Notice period: up to 60 days
Location: Hyderabad
Mode of work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432491
rathy@blackwhite.in | www.blackwhite.in

Posted 3 weeks ago

Apply

4.0 - 7.0 years

7 - 14 Lacs

Gurugram

Work from Office

Must have: big data, GCP

Years of experience: 4 to 7 years

Roles & Responsibilities:
- The candidate should have extensive production experience (1-2 years) with GCP; other cloud experience would be a strong bonus.
- Strong background in data engineering, with 4-5 years of experience in big data technologies including Hadoop, NoSQL, Spark, Kafka, etc.
- Exposure to production applications is a must, along with operating knowledge of cloud computing platforms (GCP, especially BigQuery, Dataflow, Dataproc, Storage, VMs, Networking, Pub/Sub, Cloud Functions, and Composer services).
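As an illustration of the BigQuery portion of this role, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are placeholders.

```python
# A minimal BigQuery query sketch; all identifiers are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # credentials resolved from the environment

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

# result() blocks until the job finishes, then streams rows.
for row in client.query(sql).result():
    print(row.event_date, row.events)
```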

Posted 3 weeks ago

Apply

6.0 - 10.0 years

5 - 15 Lacs

Hyderabad, Pune, Chennai

Work from Office

Notice period: 0-30 days

Required skills:
1. Big Data / Hadoop, Spark, Scala, SQL, Kafka, Unix and shell script; responsible for design, build, and deployment of the solution in big data.
2. Ability to effectively use complex analytical, interpretive, and problem-solving techniques.
3. Analytical, flexible, team-oriented, with good interpersonal/communication skills.
4. Apply internal standards for re-use, architecture, testing, and general best practices.
5. Responsible for the full software development life cycle.
6. Responsible for the on-time delivery of high-quality code with low rates of production defects.
7. Research and recommend technology to improve the current systems.
8. Communicate status and risk to stakeholders and escalate as appropriate.
9. Flexible and able to manage time effectively.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

5 - 15 Lacs

Hyderabad, Chennai

Work from Office

Notice period: 0-30 days

Required skills:
1. Big Data / Hadoop, Spark, Scala, SQL, Kafka, Unix and shell script; responsible for design, build, and deployment of the solution in big data.
2. Ability to effectively use complex analytical, interpretive, and problem-solving techniques.
3. Analytical, flexible, team-oriented, with good interpersonal/communication skills.
4. Apply internal standards for re-use, architecture, testing, and general best practices.
5. Responsible for the full software development life cycle.
6. Responsible for the on-time delivery of high-quality code with low rates of production defects.
7. Research and recommend technology to improve the current systems.
8. Communicate status and risk to stakeholders and escalate as appropriate.
9. Flexible and able to manage time effectively.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

At Goldman Sachs, as an Engineer, you don't just make things - you make things possible. Your role involves connecting people and capital with ideas to bring about change in the world. You will be tasked with solving complex engineering problems for clients, building massively scalable software and systems, designing low-latency infrastructure solutions, proactively protecting against cyber threats, and utilizing machine learning in conjunction with financial engineering to transform data into actionable insights. By joining our engineering teams, you will have the opportunity to create new businesses, revolutionize the field of finance, and explore a realm of possibilities at the pace of the markets.

Engineering at Goldman Sachs is a pivotal component of our business, encompassing our Technology Division and global strategists groups. Our dynamic environment demands innovative strategic thinking and prompt, practical solutions. If you are eager to push the boundaries of digital potential, this is the place to begin.

As part of the Bengaluru Engineering Management and Strategy (EMS) team, you will play a crucial role within the regional management team in Hyderabad and the Engineering Division in India, reporting to the lead of Hyderabad EMS/Engineering leadership. Your responsibilities will include co-leading Engineering initiatives in India, especially focusing on talent management aspects such as recruitment, people development, retention, and branding through external and internal events, and facilitating cross-divisional initiatives related to risk and resiliency, automation, and skill development. Additionally, you will be involved in process-oriented activities such as budgeting, business continuity planning, capacity/seating management, vendor engagement, and governance/controls to effectively manage the growth of the organization.

The ideal candidate for this role will possess the ability to establish strong global and regional relationships, cultivate robust vendor partnerships, and build diverse teams that embody the culture of Goldman Sachs. You should also demonstrate a commitment to consistent processes, manage risks and uphold the firm's reputation with foresight, and lead senior governance forums to formulate strategies and drive decisions for the office.

Your daily activities will involve collaborating closely with the EMS Lead/India Engineering leadership and the regional management team to define and communicate the office's identity and vision for Goldman Sachs Engineering in India. You will work with Human Capital Management (HCM) and hiring managers to support talent management initiatives and expedite Engineering recruitment processes. Establishing connections with the external ecosystem, including industry forums, academic institutions, engineering firms, startups, and vendor partners, will be a key aspect of your role. Additionally, you will support the execution of strategic priorities outlined by the India Engineering leadership team, serving as a trusted proxy to ensure consistency in messaging and adherence to policies and expectations for all staff in the region. Acting as a central point of communication, coordination, and information flow for the India Engineering leadership team, you will ensure coherence across various routine and ad hoc administrative tasks. Furthermore, you will be responsible for facilitating leadership and regional Engineering forums and meetings, planning and managing agendas, content, and follow-ups.

You will develop engaging presentations and internal communications to articulate the Engineering strategy and other leadership messages, and you will address both short-term, ad hoc requests and longer-term analyses and projects to drive continuous improvement and innovation.

In summary, your responsibilities will span program/project management, regional initiatives, firmwide initiatives, organizational awareness, talent management, risk management, and incident management. You will need to leverage your skills in strategic thinking and planning, execution, critical and analytical thinking, influencing and negotiation, judgment and problem-solving, creativity and innovation, influencing outcomes, communication, client and business focus, drive and motivation, functional expertise, and branding awareness to excel in this role.

Basic qualifications for this position include experience in implementing technology strategies in global firms, exceptional influencing skills at all levels, strong analytical abilities, self-motivation, excellent process and project management skills, the capacity to handle multiple time-sensitive projects with a focus on quality, a proactive attitude, decision-making skills, quick learning abilities, and proficiency in program management and MS Office tools. Preferred qualifications include being well versed in the global technology landscape and emerging trends, experience in business continuity planning or similar emergency scenario planning and reaction management, and familiarity with Enterprise Resource Management, Project Planning, and Expense Management applications.

At Goldman Sachs, we are dedicated to utilizing our resources to help our clients, shareholders, and the communities we serve to grow. Established in 1869, we are a prominent global investment banking, securities, and investment management firm headquartered in New York, with offices worldwide. We believe that fostering diversity and inclusion not only enhances who you are but also improves your performance. We are committed to promoting diversity and inclusion within our firm and beyond by offering numerous opportunities for personal and professional growth, from training and development programs to firmwide networks, benefits, wellness programs, and mindfulness initiatives. To learn more about our culture, benefits, and team, visit GS.com/careers.

We are committed to providing reasonable accommodations for candidates with special needs or disabilities during our recruitment process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

In conclusion, the role at Goldman Sachs offers you the chance to be part of a dynamic and innovative environment where you can contribute to shaping the future of engineering and finance.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of yourself. We are counting on your unique voice and perspective to help EY become even better. Join us and build an exceptional experience for yourself, and contribute to creating a better working world for all.

As an Assistant Manager in AI/Gen AI within the Data and Analytics team, you will be part of a multi-disciplinary technology team that delivers client projects and solutions across Data Mining & Management, Visualization, Business Analytics, Automation, Statistical Insights, and AI/GenAI. The assignments you work on will cover a wide range of countries and industry sectors.

Your key responsibilities include:
- Developing, reviewing, and implementing solutions using AI, Machine Learning, Deep Learning, and Python programming.
- Leading the development and implementation of Generative AI applications using both open-source and closed-source Large Language Models (LLMs).
- Working with advanced models for natural language processing and creative content generation.
- Designing and optimizing solutions that leverage vector databases for efficient storage and retrieval of contextual data for LLMs.
- Identifying opportunities for analytics applications within various industry sectors.
- Managing projects and ensuring smooth service delivery.
- Collaborating with cross-functional teams and stakeholders.

Skills and attributes we are looking for:
- Ability to work creatively and systematically in a time-limited, problem-solving environment
- High ethical standards and reliability
- Curiosity, creativity, and openness to new ideas
- Good interpersonal and communication skills
- Experience working with multi-cultural teams and managing multiple priorities simultaneously

To qualify for this role, you must have:
- Experience guiding teams on AI/Data Science projects and communicating results to clients
- Familiarity with implementing solutions in the Azure Cloud Framework
- Presentation skills and 6-8 years of relevant work experience in developing and implementing Agentic GenAI/AI and Machine Learning models
- Proficiency in Python programming and experience with statistical techniques, deep learning, and machine learning algorithms
- Familiarity with the software development life cycle and product development principles

Ideally, you will also have:
- Strategic thinking abilities and a customer-focused mindset
- The ability to build rapport and trust with clients
- Willingness to travel extensively and work at client sites/practice office locations

What we offer:
EY Global Delivery Services (GDS) provides a dynamic and truly global delivery network with fulfilling career opportunities and exciting projects. You will have the chance to collaborate with EY teams worldwide and work on projects with well-known brands. Continuous learning, success defined by you, transformative leadership, and a diverse and inclusive culture are all part of what EY offers its employees.

Join us at EY to be part of a team that is committed to building a better working world by creating long-term value for clients, people, and society, and by building trust in the capital markets.
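To ground the vector-database bullet above, here is a minimal numpy sketch of the retrieval step in a RAG pipeline; embed is a hypothetical stand-in for an embedding model, not a specific product API.

```python
# A minimal sketch of vector retrieval for RAG: embed texts, then fetch the
# nearest documents by cosine similarity. `embed` is a hypothetical stand-in.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical embedding call; assumed to return unit-norm row vectors."""
    raise NotImplementedError("wire this to an embedding model")

def top_k_context(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """For unit-norm vectors, cosine similarity reduces to a dot product."""
    scores = doc_vecs @ query_vec
    return np.argsort(scores)[::-1][:k]  # indices of the k best documents

# Usage (with a real embed()):
#   idx = top_k_context(embed(["user question"])[0], embed(corpus))
#   context = [corpus[i] for i in idx]  # passed to the LLM as grounding text
```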

Posted 3 weeks ago

Apply