":" Job Description: We are looking for a highly skilled Senior Data Scientist with 39 years of experience specializing in Python, Large Language Models (LLMs), NLP, Machine Learning, and Generative AI . The ideal candidate will have a deep understanding of building intelligent systems using modern AI frameworks and deploying them into scalable, production-grade environments. You will work closely with cross-functional teams to build innovative AI solutions that deliver real business value. Responsibilities: Design, develop, and deploy ML/NLP solutions using Python and state-of-the-art AI frameworks. Apply LLMs and Generative AI techniques to solve real-world problems. Build, train, fine-tune, and evaluate models for NLP and GenAI tasks. Collaborate with data engineers, MLOps, and product teams to operationalize models. Contribute to the development of scalable AI services and applications. Analyze large datasets to extract insights and support model development. Maintain clean, modular, and version-controlled code using Git. Requirements Must-Have Skills: 310 years of hands-on experience with Python for data science and ML applications. Strong expertise in Machine Learning algorithms and model development. Proficient in Natural Language Processing (NLP) and text analytics. Experience with Large Language Models (LLMs) and Generative AI frameworks (e.g., LangChain, Hugging Face Transformers). Familiarity with model deployment and real-world application integration. Experience with version control systems like Git . Good to Have: Experience with PySpark for distributed data processing. Exposure to MLOps practices and model lifecycle management. Familiarity with cloud platforms such as AWS, GCP, or Azure. Knowledge of vector databases (e.g., FAISS, Pinecone) and embeddings. Educational Qualification: Bacheloror Masterdegree in Computer Science, Data Science, Statistics, or a related field. Benefits Work with cutting-edge technologies in a collaborative and forward-thinking environment. Opportunities for continuous learning, skill development, and career growth. Exposure to high-impact projects in AI and data science. ","
Job Overview: We seek a highly skilled Java Full Stack Developer who is comfortable with frontend and backend development. The ideal candidate will be responsible for developing and designing frontend web architecture, ensuring the responsiveness of applications, and working alongside graphic designers for web design features, among other duties. The Java Full Stack Developer will be required to see a project through from conception to final product, requiring good organizational skills and attention to detail. Key Responsibilities: Frontend Development: Design and develop user-facing web applications using modern frontend languages like HTML, CSS, and JavaScript and frameworks like React.js, Angular, or Vue.js. Backend Development: Build and maintain server-side application logic using languages such as Node.js, Python, Ruby, Java, or PHP, and manage database interactions with MySQL, PostgreSQL, MongoDB, or other database systems. API Development and Integration: Develop and integrate RESTful APIs to connect frontend and backend components, ensuring smooth data flow and communication between different parts of the application. Database Management: Design, implement, and manage databases, ensuring data integrity, security, and optimal performance. Version Control and Collaboration: Use Git and other version control systems to track code changes and collaborate with other developers on the team. Deployment and DevOps: Automate deployment processes, manage cloud infrastructure, and ensure the scalability and reliability of applications through CI/CD pipelines. Security Implementation: Implement security best practices to protect the application from vulnerabilities, including authentication, authorization, and data encryption. Cross-Platform Optimization: Ensure the application is responsive and optimized for different devices, platforms, and browsers. Troubleshooting and Debugging: Identify, diagnose, and fix bugs and performance issues in the application, ensuring a smooth user experience. Collaboration and Communication: Work closely with product managers, designers, and other stakeholders to understand requirements and deliver solutions that meet business needs. Continuous Learning: Stay updated with the latest technologies, frameworks, and industry trends to improve development practices continuously. Requirements Technical Skills: Proficiency in frontend technologies like HTML, CSS, JavaScript, and frameworks like React.js, Angular, or Vue.js. Strong backend development experience with Node.js, Python, Java, or similar languages. Hands-on experience with databases like MySQL, PostgreSQL, MongoDB, or similar. Familiarity with version control systems, notably Git. Experience with cloud services like AWS, Azure, or Google Cloud. Knowledge of CI/CD pipelines and DevOps practices. Understanding of security principles and how to apply them to web applications. Soft Skills: Excellent problem-solving skills and attention to detail. Strong communication skills and the ability to work collaboratively in a team environment. Ability to manage multiple tasks and projects simultaneously. Eagerness to learn new technologies and improve existing skills.
We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and financial domains, with a strong focus on creating robust, scalable, and efficient solutions. Key Responsibilities: -Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks. -Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions with a minimum of 5 years of experience. -ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing. -CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications. -Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders. -Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals. -Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines. -Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs. -Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences. Requirements Required Qualifications: -5+ years of experience with Python programming. -5+ years of experience in cloud infrastructure, particularly AWS. -3+ years of experience with PySpark, including usage with EMR or Glue Notebooks. -3+ years of experience with Apache Airflow for workflow orchestration. -Solid experience with data analysis in fast-paced environments. -Strong understanding of capital markets, financial systems, or prior experience in the financial domain is a must. -Proficiency with cloud-native technologies and frameworks. -Familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline. -Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development. -Excellent problem-solving skills and ability to handle complex technical challenges. -Strong communication and interpersonal skills for collaboration across teams and presenting solutions to diverse audiences. -Ability to thrive in a fast-paced, dynamic environment. Benefits Standard Company Benefits
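As an illustration of the PySpark ETL work this role centers on, here is a minimal sketch of an extract-transform-load job of the kind that might run on EMR or Glue. The bucket paths and column names are hypothetical, not taken from the posting.

```python
# Minimal PySpark ETL sketch: read raw JSON from S3, cleanse it, and write
# partitioned Parquet back out. Paths and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("compliance-etl").getOrCreate()

# Extract: raw trade events landed in S3 as JSON (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/trades/")

# Transform: basic cleansing plus a derived partition column.
cleaned = (
    raw.dropDuplicates(["trade_id"])
       .filter(F.col("amount") > 0)
       .withColumn("trade_date", F.to_date("executed_at"))
)

# Load: write Parquet partitioned by date for downstream query engines.
(cleaned.write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("s3://example-curated-bucket/trades/"))
```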
We are seeking a skilled ETL Data Engineer to design, build, and maintain efficient and reliable ETL pipelines, ensuring seamless data integration, transformation, and delivery to support business intelligence and analytics. The ideal candidate should have hands-on experience with ETL tools like Talend, strong database knowledge, and familiarity with AWS services. Key Responsibilities: Design, develop, and optimize ETL workflows and data pipelines using Talend or similar ETL tools. Collaborate with stakeholders to understand business requirements and translate them into technical specifications. Integrate data from various sources, including databases, APIs, and cloud platforms, into data warehouses or data lakes. Create and optimize complex SQL queries for data extraction, transformation, and loading. Manage and monitor ETL processes to ensure data integrity, accuracy, and efficiency. Work with AWS services like S3, Redshift, RDS, and Glue for data storage and processing. Implement data quality checks and ensure compliance with data governance standards. Troubleshoot and resolve data discrepancies and performance issues. Document ETL processes, workflows, and technical specifications for future reference. Requirements Bachelor's degree in Computer Science, Information Technology, or a related field. 4+ years of experience in ETL development, data engineering, or data warehousing. Hands-on experience with Talend or similar ETL tools (Informatica, SSIS, etc.). Proficiency in SQL and strong understanding of database concepts (relational and non-relational). Experience working in an AWS environment with services like S3, Redshift, RDS, or Glue. Strong problem-solving skills and ability to troubleshoot data-related issues. Knowledge of scripting languages like Python or Shell scripting is a plus. Good communication skills to collaborate with cross-functional teams.
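The posting lists Python scripting as a plus and data-quality checks as a responsibility. The sketch below shows one hedged example: validating a CSV file landed in S3 using boto3. The bucket, key, and validation rules are assumptions for illustration.

```python
# Hedged sketch of a post-load data-quality check in Python.
# Bucket name, object key, and the rules themselves are assumptions.
import csv
import io

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-landing-bucket", Key="customers/2024-01.csv")
body = obj["Body"].read().decode("utf-8")
rows = list(csv.DictReader(io.StringIO(body)))

# Basic checks: no rows missing the key column, no duplicate IDs.
missing = [r for r in rows if not r.get("customer_id")]
ids = [r["customer_id"] for r in rows if r.get("customer_id")]
duplicates = len(ids) - len(set(ids))

if missing or duplicates:
    raise ValueError(
        f"{len(missing)} rows missing customer_id, {duplicates} duplicate ids"
    )
print(f"Quality check passed for {len(rows)} rows")
```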
Senior Specialist Cloud Engineer - Contact Centre Innovation & GenAI Role Summary: We are seeking an experienced and highly skilled Senior Specialist Cloud Engineer to join our innovative team. In this role, you will be responsible for designing, implementing, and maintaining cloud-based solutions using cutting-edge technologies. You will play a crucial role in optimizing our cloud infrastructure, improving system performance, and ensuring the scalability and reliability of our applications. What you will do: (Roles & Responsibilities) - Design and implement complex cloud-based solutions using AWS services - Design and optimize database schemas and queries, particularly with DynamoDB - Write, test, and maintain high-quality Python code for cloud-based applications - Able to work on Amazon Connect and integrate Amazon services - Collaborate with cross-functional teams to identify and implement cloud-based solutions - Ensure security, compliance, and best practices in cloud infrastructure - Troubleshoot and resolve complex technical issues in cloud environments - Mentor junior engineers and contribute to the team's technical growth - Stay up-to-date with the latest cloud technologies and industry trends Requirements What you need to succeed: (MUST Haves) - Bachelor's degree in Computer Science, Engineering, or a related field - 5-9 years of experience in cloud engineering, with a strong focus on AWS - Extensive experience with Python programming and software development - Strong knowledge of database systems, particularly DynamoDB - Hands-on experience in Amazon Connect - Excellent problem-solving and analytical skills - Strong communication and collaboration abilities Ideal Candidate will also have: - Experience with containerization technologies (e.g., Docker, Kubernetes) - Knowledge of CI/CD pipelines and DevOps practices - Familiarity with serverless architectures and microservices - Experience with data analytics and big data technologies - Understanding of machine learning and AI concepts - Contributions to open-source projects or technical communities - AWS certifications (e.g., Solutions Architect, DevOps Engineer) are a plus - Experience mentoring junior engineers or leading small teams - Strong project management skills and ability to manage multiple priorities If you are passionate about cloud technologies, have a proven track record of delivering innovative solutions, and thrive in a collaborative environment, we want to hear from you. Join our team and help shape the future of cloud computing! Benefits As per company standards.
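To ground the Python-plus-DynamoDB requirement above, here is a minimal boto3 sketch that writes and reads a contact record such as one emitted by an Amazon Connect flow. The table and attribute names are hypothetical.

```python
# Minimal boto3/DynamoDB sketch: persist and fetch a contact record.
# Table name and attributes are illustrative assumptions.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ContactRecords")  # hypothetical table

# Persist a summary of a finished contact (e.g., from an Amazon Connect flow).
table.put_item(
    Item={
        "contact_id": "abc-123",
        "channel": "VOICE",
        "duration_seconds": 245,
    }
)

# Fetch it back by partition key.
response = table.get_item(Key={"contact_id": "abc-123"})
print(response.get("Item"))
```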
Installing, configuring, and troubleshooting all Windows and macOS systems. Securing the network by installing antivirus software, troubleshooting antivirus-related issues, and regularly updating antivirus definitions. Configuring and troubleshooting local network printers. Resolving hardware-related issues in printers and other peripherals. Providing admin rights, remote desktop access, and file and folder access to users as per request. Troubleshooting all end-to-end technical problems through remote tools. Monitoring the compliance status of all desktops/servers in terms of patch/DAT status. Troubleshooting issues on Office 365 and escalating to the proper team. Identifying and solving issues on Microsoft products (Excel, PowerPoint, Word, Teams). Troubleshooting issues on Citrix connections and client VPN. Adding devices to Azure AD; creating, deploying, and managing Intune MDM. Creating, deploying, and managing app protection policies and device configuration policies from the Intune endpoint manager. Creating and managing security firewalls, switches, and ILL. Managing and updating McAfee web controls and firewall rules. Maintaining and monitoring CCTV cameras and eSSL access control. Identifying the causes of networking problems using diagnostic testing software and equipment. Resolving IT tickets regarding computer software, hardware, and application issues on time. Setting up equipment for employee use, performing or ensuring proper installation of cables, operating systems, or appropriate software. Installing and performing minor repairs to hardware, software, or peripheral equipment. Requirements Good experience in system administration Technical support experience Experience in ITIL process Experience in RIM (Remote Infrastructure Management) Good knowledge of virtualization and cloud concepts with VMware and/or OpenStack Excellent communication skills
Job Title: Technical Project Manager Location: Hyderabad Employment Type: Full-time Experience: 10+ years Domain: Banking and Insurance We are seeking a Technical Project Manager to lead and coordinate the delivery of data-centric projects. This role bridges the gap between engineering teams and business stakeholders, ensuring the successful execution of technical initiatives, particularly in data infrastructure, pipelines, analytics, and platform integration. Responsibilities: Lead end-to-end project management for data-driven initiatives, including planning, execution, delivery, and stakeholder communication. Work closely with data engineers, analysts, and software developers to ensure technical accuracy and timely delivery of projects. Translate business requirements into technical specifications and work plans. Manage project timelines, risks, resources, and dependencies using Agile, Scrum, or Kanban methodologies. Drive the development and maintenance of scalable ETL pipelines, data models, and data integration workflows. Oversee code reviews and ensure adherence to data engineering best practices. Provide hands-on support when necessary, in Python-based development or debugging. Collaborate with cross-functional teams including Product, Data Science, DevOps, and QA. Track project metrics and prepare progress reports for stakeholders. Requirements Required Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. 10+ years of experience in project management or technical leadership roles. Strong understanding of modern data architectures (e.g., data lakes, warehousing, streaming). Experience working with cloud platforms like AWS, GCP, or Azure. Familiarity with tools such as JIRA, Confluence, Git, and CI/CD pipelines. Strong communication and stakeholder management skills. Benefits Company standard benefits.
":" Job Title: Java Developer Location: Hyderabad Employment Type: Full-time Experience: 4+ years Domain: Banking and Insurance Key Responsibilities: Design, develop, and maintain scalable Java applications using Spring Boot framework. Build and deploy microservices-based architectures to support modular and efficient software solutions. Develop and optimize database interactions using Hibernate ORM. Collaborate with cross-functional teams including QA, DevOps, and Product Management to deliver end-to-end solutions. Write clean, reusable, and well-documented code following coding standards and best practices. Participate in code reviews, unit testing, and integration testing. Troubleshoot and resolve technical issues in a timely manner. Contribute to continuous improvement by suggesting and implementing new technologies or processes. Support deployments and basic cloud-related operations, working closely with cloud engineers or DevOps teams. Requirements Strong proficiency in Java programming language. Hands-on experience with Spring Boot framework and microservices architecture. Solid knowledge of Hibernate or other ORM frameworks. Understanding of RESTful API development and integration. Basic knowledge of cloud platforms (AWS, Azure, or GCP) and cloud-native application concepts. Experience with relational databases (MySQL, PostgreSQL, Oracle, etc.). Familiarity with version control systems such as Git. Good understanding of software development lifecycle (SDLC) and Agile methodologies. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Benefits Company standard benefits. ","
We are seeking an experienced Data Solution Architect to lead the design and implementation of scalable, secure, and high-performing data solutions across cloud and hybrid environments. The ideal candidate will bring deep expertise in Data Engineering, APIs, Python, Spark/PySpark, and enterprise cloud platforms such as AWS and Azure. This is a strategic, client-facing role that involves working closely with stakeholders, engineering teams, and business leaders to architect and deliver robust data platforms. Key Responsibilities: Architect end-to-end data solutions across cloud (AWS/Azure) and on-premises environments Develop and integrate RESTful APIs for data ingestion, transformation, and distribution Define data architecture standards, best practices, and governance frameworks Work with DevOps and cloud teams to deploy solutions using CI/CD and infrastructure-as-code Guide and mentor data engineering teams in solution implementation and performance optimization Ensure high availability, scalability, and data security compliance across platforms Collaborate with product owners and stakeholders to translate business needs into technical specifications Conduct architecture reviews, risk assessments, and solution validation Requirements Required Skills & Experience: 15 to 22 years of total experience in IT, with at least 5+ years in data architecture roles Strong experience in data processing frameworks and building ETL solutions Proven expertise in designing and deploying solutions on AWS and Azure cloud platforms Hands-on experience with data integration, real-time streaming, and API-based data access Proficient in data modeling (structured, semi-structured, unstructured data) Deep understanding of data lakes, data warehouses, and modern data mesh/architecture patterns Experience with tools such as Airflow, Glue, Databricks, Synapse, Redshift, or similar Knowledge of security, compliance, and governance practices in large-scale data platforms Strong communication, leadership, and client-facing skills Benefits Standard Company Benefits
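One responsibility above is developing RESTful APIs for data ingestion. The following is a hedged Python sketch of pulling paginated records from a REST endpoint and staging them as newline-delimited JSON for a data lake; the endpoint, pagination scheme, and output path are all assumptions.

```python
# Hedged sketch of API-based data ingestion: page through a REST endpoint
# and stage records as NDJSON, a common landing format for data lakes.
# URL, pagination scheme, and file path are illustrative assumptions.
import json

import requests

BASE_URL = "https://api.example.com/v1/transactions"  # hypothetical API

def fetch_all(page_size: int = 500):
    """Yield records from a page-numbered REST endpoint until exhausted."""
    page = 1
    while True:
        resp = requests.get(
            BASE_URL, params={"page": page, "size": page_size}, timeout=30
        )
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:
            return
        yield from batch
        page += 1

with open("transactions.ndjson", "w", encoding="utf-8") as f:
    for record in fetch_all():
        f.write(json.dumps(record) + "\n")
```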
":" Were interested in hearing from people who Have solid hands-on experience of data engineering/ETL principles and practices with Unix/Linux, Ab Initio and other data integration tools and frameworks. Are confident demonstrating coding, design, debugging and problem-solving skills Pride themselves in the ability to mentor and provide technical assistance to team members. Are knowledgeable to apply and promote industry best patterns and practices. Have solid experience in developing and deploying high quality software solutions with comprehensive test coverage without supervision. Are comfortable with estimating development effort for new features Have the ability to lead Level 3 or above support and technical troubleshooting activity. Requirements Tech Skills We use a broad range of tools, languages, and frameworks. We dont expect you to know them all but experience or exposure with some of these (or equivalents) will set you up for success in this team. Hands-on programming experience of 8-12 years in Ab Initio related products and shell scripting. Background with both Batch and continuous flows is highly regarded. Experience with RDBMS (Oracle) and using SQL or other data integration/ETL tools. AWS Cloud Integration knowledge of MQs, REST APIs and Kafka DevOps and production support. Benefits As per company standards. ","
":" Job Title: PySpark Data Engineer Experience: 5 8 Years Location: Hyderabad Employment Type: Full-Time Job Summary: We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The ideal candidate will have 58 years of experience in designing and implementing data pipelines using PySpark , AWS Glue , and Apache Airflow , with strong proficiency in SQL . You will be responsible for building scalable data processing solutions, optimizing data workflows, and collaborating with cross-functional teams to deliver high-quality data assets. Key Responsibilities: Design, develop, and maintain large-scale ETL pipelines using PySpark and AWS Glue . Orchestrate and schedule data workflows using Apache Airflow . Optimize data processing jobs for performance and cost-efficiency. Work with large datasets from various sources, ensuring data quality and consistency. Collaborate with Data Scientists, Analysts, and other Engineers to understand data requirements and deliver solutions. Write efficient, reusable, and well-documented code following best practices. Monitor data pipeline health and performance; resolve data-related issues proactively. Participate in code reviews, architecture discussions, and performance tuning. Requirements 58 years of experience in data engineering roles. Strong expertise in PySpark for distributed data processing. Hands-on experience with AWS Glue and other AWS data services (S3, Athena, Lambda, etc.). Experience with Apache Airflow for workflow orchestration. Strong proficiency in SQL for data extraction, transformation, and analysis. Familiarity with data modeling concepts and data lake/data warehouse architectures. Experience with version control systems (e.g., Git) and CI/CD processes. Ability to write clean, scalable, and production-grade code. Benefits Company standard benefits. ","
We are seeking a highly experienced and hands-on Lead/Senior Data Engineer to architect, develop, and optimize data solutions in a cloud-native environment. The ideal candidate will have 7-12 years of strong technical expertise in AWS Glue, PySpark, and Python, along with experience designing robust data pipelines and frameworks for large-scale enterprise systems. Prior exposure to the financial domain or regulated environments is a strong advantage. Key Responsibilities: Solution Architecture: Design scalable and secure data pipelines using AWS Glue, PySpark, and related AWS services (EMR, S3, Lambda, etc.) Leadership & Mentorship: Guide junior engineers, conduct code reviews, and enforce best practices in development and deployment. ETL Development: Lead the design and implementation of end-to-end ETL processes for structured and semi-structured data. Framework Building: Develop and evolve data frameworks, reusable components, and automation tools to improve engineering productivity. Performance Optimization: Optimize large-scale data workflows for performance, cost, and reliability. Data Governance: Implement data quality, lineage, and governance strategies in compliance with enterprise standards. Collaboration: Work closely with product, analytics, compliance, and DevOps teams to deliver high-quality solutions aligned with business goals. CI/CD Automation: Set up and manage continuous integration and deployment pipelines using AWS CodePipeline, Jenkins, or GitLab. Documentation & Presentations: Prepare technical documentation and present architectural solutions to stakeholders across levels. Requirements Required Qualifications: 7-12 years of experience in data engineering or related fields. Strong expertise in Python programming with a focus on data processing. Extensive experience with AWS Glue (both Glue Jobs and Glue Studio/Notebooks). Deep hands-on experience with PySpark for distributed data processing. Solid AWS knowledge: EMR, S3, Lambda, IAM, Athena, CloudWatch, Redshift, etc. Proven experience in architecting and managing complex ETL workflows. Proficiency with Apache Airflow or similar orchestration tools. Hands-on experience with CI/CD pipelines and DevOps best practices. Familiarity with data quality, data lineage, and metadata management. Strong experience working in agile/scrum teams. Excellent communication and stakeholder engagement skills. Preferred/Good to Have: Experience in financial services, capital markets, or compliance systems. Knowledge of data modeling, data lakes, and data warehouse architecture. Familiarity with SQL (Athena/Presto/Redshift Spectrum). Exposure to ML pipeline integration or event-driven architecture is a plus. Benefits Flexible work culture and remote options Opportunity to lead cutting-edge cloud data engineering projects Skill-building in large-scale, regulated environments.
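As a concrete reference for the AWS Glue development named above, this is the standard PySpark Glue job skeleton: initialize the job, read from the Glue Data Catalog, and write curated Parquet to S3. The database, table, and output path are hypothetical.

```python
# Standard AWS Glue job skeleton (PySpark). Catalog names and the S3 path
# are illustrative assumptions, not values from the posting.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="finance", table_name="raw_positions"
)

# Drop malformed records, then write curated Parquet to S3.
valid = dyf.filter(lambda r: r["position_id"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=valid,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/positions/"},
    format="parquet",
)
job.commit()
```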
Key Responsibilities: Engage directly with clients to understand requirements, provide solution design, and drive successful project delivery. Lead cloud migration initiatives, specifically moving on-premise applications and databases to AWS cloud platforms. Design, develop, and maintain scalable, reliable, and secure data applications in a cloud environment. Lead and mentor a team of engineers; oversee task distribution, progress tracking, and issue resolution. Develop, optimize, and troubleshoot complex SQL queries and stored procedures. Design and implement robust ETL pipelines using tools such as Talend, Informatica, or DataStage. Ensure optimal usage and performance of Amazon Redshift and implement performance tuning strategies. Collaborate across teams to implement best practices in cloud architecture and data management. Required Skills and Qualifications: Strong hands-on experience with the AWS ecosystem, including services related to storage, compute, and data analytics. In-depth knowledge of data warehouse architecture and best practices. Proven experience in on-prem to cloud migration projects. Expertise in at least one ETL tool: Talend, Informatica, or DataStage. Strong command of SQL and Stored Procedures. Practical knowledge and usage of Amazon Redshift. Demonstrated experience in leading teams and managing project deliverables. Strong understanding of performance tuning for data pipelines and databases.
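The role above combines Redshift, stored procedures, and AWS tooling. One hedged way to invoke a stored procedure without managing JDBC connections is the Redshift Data API, sketched below; the cluster, database, user, and procedure names are assumptions.

```python
# Hedged sketch: call a Redshift stored procedure via the Redshift Data API.
# Cluster, database, user, and procedure identifiers are assumptions.
import boto3

client = boto3.client("redshift-data")

response = client.execute_statement(
    ClusterIdentifier="example-cluster",  # hypothetical cluster
    Database="analytics",
    DbUser="etl_user",
    Sql="CALL refresh_daily_sales();",    # hypothetical stored procedure
)

# The call is asynchronous; poll the statement status.
status = client.describe_statement(Id=response["Id"])["Status"]
print(f"Statement {response['Id']} is {status}")
```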
We are seeking a Lead/Senior Data Engineer with 7-12 years of experience to architect, develop, and optimize data solutions in a cloud-native environment. The role requires strong expertise in AWS Glue, PySpark, and Python with a proven ability to design scalable data pipelines and frameworks for large-scale enterprise systems. Prior exposure to financial services or regulated environments is a strong advantage. Key Responsibilities Design and implement secure, scalable pipelines using AWS Glue, PySpark, EMR, S3, Lambda, and other AWS services. Lead ETL development for structured and semi-structured data, ensuring high performance and reliability. Build reusable frameworks, automation tools, and CI/CD pipelines with AWS CodePipeline, Jenkins, or GitLab. Mentor junior engineers, conduct code reviews, and enforce best practices. Implement data governance practices including quality, lineage, and compliance standards. Collaborate with product, analytics, compliance, and DevOps teams to align technical solutions with business goals. Optimize workflows for cost efficiency, scalability, and speed. Prepare technical documentation and present architectural solutions to stakeholders. Requirements Strong hands-on experience with AWS Glue, PySpark, Python, and AWS services (EMR, S3, Lambda, Redshift, Athena). Proficiency in ETL workflows, Airflow (or equivalent), and DevOps practices. Solid knowledge of data governance, lineage, and agile methodologies. Excellent communication and stakeholder engagement skills. Financial services or regulated environment background preferred.
Job Description: We are looking for a skilled Angular Developer to join our front-end development team. In this role, you will be responsible for implementing visual elements that users see and interact with in our web applications, which makes your role crucial for ensuring the success of our business. Responsibilities: Develop user interfaces for modern web applications using Angular. Build reusable components and front-end libraries for future use. Ensure the technical feasibility of UI/UX designs. Optimize application for maximum speed and scalability. Collaborate with other team members and stakeholders. Requirements: Proficient in Angular (preferably Angular 7+). Experience with HTML5, CSS3, JavaScript/TypeScript. Knowledge of state management (e.g., NgRx, RxJS) and RESTful APIs. Familiarity with testing frameworks (e.g., Jasmine) and test runners (e.g., Karma). Understanding of Agile methodologies. Excellent problem-solving and communication skills. B.E/B.Tech in Computer Science or relevant field
DATAECONOMY is one of the fastest-growing Data & Analytics companies with a global presence. We are well-differentiated and are known for our thought leadership, out-of-the-box products, cutting-edge solutions, accelerators, innovative use cases, and cost-effective service offerings. We offer products and solutions in Cloud, Data Engineering, Data Governance, AI/ML, DevOps, and Blockchain to large corporates across the globe. Strategic partners with AWS, Collibra, Cloudera, Neo4j, DataRobot, Global IDs, Tableau, MuleSoft, and Talend. Role: Senior PL/SQL Developer Qualification: BTech/MTech/MCA Job Location: Hyderabad Experience: 8-12 Years Work Mode: Hybrid Key Responsibilities Expected to contribute to design and development with limited support. Responsible for developing new features, debugging/troubleshooting, and fixing operational incidents. Adhering to engineering practices and the guidelines established. Unit test and automate the developed code before opening it to QA. Taking part in all scrum ceremonies. Mentoring junior team members. Requirements 8-12 years of experience in object-oriented software design and development. Must Haves: Strong experience with Oracle functions, procedures, triggers, packages, and performance tuning Strong knowledge of Oracle and SQL, with working experience in PL/SQL. Strong problem-solving and analytical skills. Ability to multi-task and stay organized in a dynamic work environment. Good debugging skills Ability to learn and adapt quickly. Familiar with Agile/Scrum methodologies.
DATAECONOMY is one of the fastest-growing Data & Analytics companies with a global presence. We are well-differentiated and are known for our thought leadership, out-of-the-box products, cutting-edge solutions, accelerators, innovative use cases, and cost-effective service offerings. We offer products and solutions in Cloud, Data Engineering, Data Governance, AI/ML, DevOps, and Blockchain to large corporates across the globe. Strategic partners with AWS, Collibra, Cloudera, Neo4j, DataRobot, Global IDs, Tableau, MuleSoft, and Talend. Job Overview: We are seeking a dynamic and experienced Lead Software Test Engineer with a strong background in Selenium and API Automation Testing using Java. As a key member of our testing team, you will be responsible for leading and executing test strategies, mentoring team members, and ensuring the delivery of high-quality software products. The ideal candidate should have in-depth knowledge of automation testing, excellent leadership skills, and a passion for driving excellence in testing practices. Key Responsibilities: Define and implement test strategies, methodologies, and best practices for Selenium and API automation testing by developing an effective automation framework. Design, develop, and maintain robust and scalable automation frameworks using Selenium WebDriver and Java. Create and execute automated test scripts for web applications, ensuring comprehensive test coverage. Develop and implement automated tests for APIs and microservices using tools such as Rest Assured or similar. Verify data integrity, security, and performance of APIs through systematic testing. Collaborate with cross-functional teams to develop test plans, test cases, and test scenarios. Execute test cases and ensure the timely identification and resolution of defects. Integrate automated tests into CI/CD pipelines to support continuous testing and deployment. Implement and optimize automated regression testing to maintain software stability. Work closely with development teams, product managers, and other stakeholders to ensure alignment with project goals and requirements. Provide timely and accurate testing status reports to project stakeholders. Champion and enforce quality assurance processes and standards throughout the software development lifecycle. Conduct code reviews and ensure the adoption of best coding practices within the testing team. Lead and mentor a team of software test engineers, providing technical guidance and support. Requirements Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience in leading Selenium and API automation testing efforts. Expert in understanding requirements and developing automation frameworks from scratch. Strong programming skills in Java and hands-on experience with testing frameworks such as TestNG or JUnit. Extensive experience in designing and implementing automation frameworks for web applications. Solid understanding of API testing principles and tools. Experience with version control systems (e.g., Git) and build tools (e.g., Maven, Gradle). Familiarity with CI/CD tools (e.g., Jenkins, Bamboo). Excellent leadership, communication, and interpersonal skills. Ability to drive innovation and continuous improvement within the testing team. Mode of Work: Hybrid Notice Period: 0-15 days Location: Hyderabad/Pune Experience: 9-14 Years
We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and financial domains, with a strong focus on creating robust, scalable, and efficient solutions. Key Responsibilities: Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks. Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions with a minimum of 5 years of experience. ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing. CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications. Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders. Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals. Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines. Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs. Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences. Requirements 3+ years of experience with Python programming. 3 years of experience in cloud infrastructure, particularly AWS. 3+ years of experience with PySpark, including usage with EMR or Glue Notebooks. 2+ years of experience with Apache Airflow for workflow orchestration. Solid experience with data analysis in fast-paced environments. Proficiency with cloud-native technologies and frameworks. Familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline. Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development. Excellent problem-solving skills and ability to handle complex technical challenges. Strong communication and interpersonal skills for collaboration across teams and presenting solutions to diverse audiences.