Hyderabad, Telangana, India
Not disclosed
Hybrid
Full Time
We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and the financial domain, with a strong focus on creating robust, scalable, and efficient solutions.

Key Responsibilities:
• Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks.
• Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions.
• ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing.
• CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications.
• Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders.
• Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals.
• Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines.
• Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs.
• Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences.

Requirements
Required Qualifications:
• 5+ years of experience with Python programming.
• 5+ years of experience in cloud infrastructure, particularly AWS.
• 3+ years of experience with PySpark, including usage with EMR or Glue Notebooks.
• 3+ years of experience with Apache Airflow for workflow orchestration.
• Solid experience with data analysis in fast-paced environments.
• A strong understanding of capital markets, financial systems, or prior experience in the financial domain is a must.
• Proficiency with cloud-native technologies and frameworks.
• Familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
• Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development.
• Excellent problem-solving skills and the ability to handle complex technical challenges.
• Strong communication and interpersonal skills for collaborating across teams and presenting solutions to diverse audiences.
• Ability to thrive in a fast-paced, dynamic environment.

Benefits
Standard Company Benefits
Hyderabad, Telangana, India
Not disclosed
Hybrid
Full Time
DATAECONOMY is one of the fastest-growing Data & AI companies with a global presence. We are well-differentiated and are known for our thought leadership, out-of-the-box products, cutting-edge solutions, accelerators, innovative use cases, and cost-effective service offerings. We offer products and solutions in Cloud, Data Engineering, Data Governance, AI/ML, DevOps, and Blockchain to large corporates across the globe. We are strategic partners with AWS, Collibra, Cloudera, Neo4j, DataRobot, Global IDs, Tableau, MuleSoft, and Talend.

Job Title: Delivery Head
Experience: 18 - 22 Years
Location: Hyderabad
Notice Period: Immediate joiners are preferred

Job Summary: We are seeking a seasoned Technical Delivery Manager with deep expertise in Data Engineering and Data Science to lead complex data initiatives and drive successful delivery across cross-functional teams. The ideal candidate brings a blend of strategic thinking, technical leadership, and project execution skills, along with hands-on knowledge of modern data platforms, machine learning, and analytics frameworks.
Key Responsibilities:
Program & Delivery Management
• Oversee end-to-end delivery of large-scale data programs, ensuring alignment with business goals, timelines, and quality standards.
• Manage cross-functional project teams including data engineers, data scientists, analysts, and DevOps personnel.
• Ensure agile delivery through structured sprint planning, backlog grooming, and iterative delivery.
Technical Leadership
• Provide architectural guidance and review of data engineering pipelines and machine learning models.
• Evaluate and recommend modern data platforms (e.g., Snowflake, Databricks, Azure Data Services, AWS Redshift, GCP BigQuery).
• Ensure best practices in data governance, quality, and compliance (e.g., GDPR, HIPAA).
Stakeholder & Client Management
• Act as the primary point of contact for technical discussions with clients, business stakeholders, and executive leadership.
• Translate complex data requirements into actionable project plans.
• Present technical roadmaps and delivery status to stakeholders and C-level executives.
Team Development & Mentoring
• Lead, mentor, and grow a high-performing team of data professionals.
• Conduct code and design reviews; promote innovation and continuous improvement.

Key Skills and Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
• 18–22 years of total IT experience with at least 8–10 years in data engineering, analytics, or data science.
• Proven experience delivering enterprise-scale data platforms, including:
  • ETL/ELT pipelines using tools like Apache Spark, Airflow, Kafka, Talend, or Informatica.
  • Data warehouse and lake architectures (e.g., Snowflake, Azure Synapse, AWS Redshift, Delta Lake).
  • Machine learning lifecycle management (e.g., model training, deployment, MLOps using MLflow, SageMaker, or Vertex AI).
• Strong knowledge of cloud platforms (Azure, AWS, or GCP).
• Deep understanding of Agile, Scrum, and DevOps principles.
• Excellent problem-solving, communication, and leadership skills.
Preferred Certifications (Optional but Beneficial):
• PMP, SAFe Agile, or similar project management certifications.
• Certifications in cloud platforms (e.g., AWS Certified Data Analytics, Azure Data Engineer Associate).
• Certified Scrum Master (CSM) or equivalent.
Gurgaon, Haryana, India
Not disclosed
On-site
Full Time
Senior Specialist Cloud Engineer - Contact Centre Innovation & GenAI

Role Summary: We are seeking an experienced and highly skilled Senior Specialist Cloud Engineer to join our innovative team. In this role, you will be responsible for designing, implementing, and maintaining cloud-based solutions using cutting-edge technologies. You will play a crucial role in optimizing our cloud infrastructure, improving system performance, and ensuring the scalability and reliability of our applications.

What you will do: (Roles & Responsibilities)
• Design and implement complex cloud-based solutions using AWS services
• Design and optimize database schemas and queries, particularly with DynamoDB
• Write, test, and maintain high-quality Python code for cloud-based applications
• Work on Amazon Connect and integrate related AWS services
• Collaborate with cross-functional teams to identify and implement cloud-based solutions
• Ensure security, compliance, and best practices in cloud infrastructure
• Troubleshoot and resolve complex technical issues in cloud environments
• Mentor junior engineers and contribute to the team's technical growth
• Stay up to date with the latest cloud technologies and industry trends

Requirements
What you need to succeed: (MUST haves)
• Bachelor's degree in Computer Science, Engineering, or a related field
• 5-9 years of experience in cloud engineering, with a strong focus on AWS
• Extensive experience with Python programming and software development
• Strong knowledge of database systems, particularly DynamoDB
• Hands-on experience with Amazon Connect
• Excellent problem-solving and analytical skills
• Strong communication and collaboration abilities

Ideal candidate will also have:
• Experience with containerization technologies (e.g., Docker, Kubernetes)
• Knowledge of CI/CD pipelines and DevOps practices
• Familiarity with serverless architectures and microservices
• Experience with data analytics and big data technologies
• Understanding of machine learning and AI concepts
• Contributions to open-source projects or technical communities
• AWS certifications (e.g., Solutions Architect, DevOps Engineer) are a plus
• Experience mentoring junior engineers or leading small teams
• Strong project management skills and ability to manage multiple priorities

If you are passionate about cloud technologies, have a proven track record of delivering innovative solutions, and thrive in a collaborative environment, we want to hear from you. Join our team and help shape the future of cloud computing!

Benefits
As per company standards.
Hyderabad, Telangana, India
Not disclosed
Remote
Full Time
Primary Responsibilities
• Installing, configuring, and troubleshooting all Windows and macOS systems.
• Securing the network by installing and troubleshooting antivirus software and regularly updating the antivirus.
• Configuring and troubleshooting local and network printers; resolving hardware-related issues in printers and other peripherals.
• Providing admin rights, remote desktop access, and file and folder access to users as requested.
• Troubleshooting all end-to-end technical problems through remote tools.
• Monitoring the compliance status of all desktops/servers in terms of patch/DAT status.
• Troubleshooting issues on Office 365 and escalating to the proper team.
• Identifying and solving issues with Microsoft products (Excel, PowerPoint, Word, Teams).
• Troubleshooting issues with Citrix connections and client VPNs.
• Adding devices to Azure AD; creating, deploying, and managing Intune MDM.
• Creating, deploying, and managing app protection policies and device configuration policies from the Intune endpoint manager.
• Creating and managing security firewalls, switches, and ILL.
• Managing and updating McAfee web controls and firewall rules.
• Maintaining and monitoring CCTV cameras and eSSL access control.
• Identifying the causes of networking problems using diagnostic testing software and equipment.
• Resolving IT tickets regarding computer software, hardware, and application issues on time.
• Setting up equipment for employee use, performing or ensuring proper installation of cables, operating systems, or appropriate software.
• Installing and performing minor repairs to hardware, software, or peripheral equipment.

Requirements
• Good experience in system administration
• Technical support experience
• Experience with the ITIL process
• Experience in RIM (Remote Infrastructure Management)
• Good knowledge of virtualization and cloud concepts with VMware and/or OpenStack
• Excellent communication skills

Benefits
Standard Company Benefits
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
We are looking for a highly skilled Senior Data Scientist with 3–9 years of experience specializing in Python, Large Language Models (LLMs), NLP, Machine Learning, and Generative AI. The ideal candidate will have a deep understanding of building intelligent systems using modern AI frameworks and deploying them into scalable, production-grade environments. You will work closely with cross-functional teams to build innovative AI solutions that deliver real business value.

Responsibilities
• Design, develop, and deploy ML/NLP solutions using Python and state-of-the-art AI frameworks.
• Apply LLMs and Generative AI techniques to solve real-world problems.
• Build, train, fine-tune, and evaluate models for NLP and GenAI tasks.
• Collaborate with data engineers, MLOps, and product teams to operationalize models.
• Contribute to the development of scalable AI services and applications.
• Analyze large datasets to extract insights and support model development.
• Maintain clean, modular, and version-controlled code using Git.

Requirements
Must-Have Skills:
• 3–10 years of hands-on experience with Python for data science and ML applications.
• Strong expertise in Machine Learning algorithms and model development.
• Proficiency in Natural Language Processing (NLP) and text analytics.
• Experience with Large Language Models (LLMs) and Generative AI frameworks (e.g., LangChain, Hugging Face Transformers).
• Familiarity with model deployment and real-world application integration.
• Experience with version control systems like Git.

Good to Have:
• Experience with PySpark for distributed data processing.
• Exposure to MLOps practices and model lifecycle management.
• Familiarity with cloud platforms such as AWS, GCP, or Azure.
• Knowledge of vector databases (e.g., FAISS, Pinecone) and embeddings.

Educational Qualification
Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.

Benefits
• Work with cutting-edge technologies in a collaborative and forward-thinking environment.
• Opportunities for continuous learning, skill development, and career growth.
• Exposure to high-impact projects in AI and data science.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and the financial domain, with a strong focus on creating robust, scalable, and efficient solutions.

Key Responsibilities
• Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks.
• Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions.
• ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing.
• CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications.
• Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders.
• Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals.
• Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines.
• Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs.
• Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences.

Requirements
Required Qualifications:
• 5+ years of experience with Python programming.
• 5+ years of experience in cloud infrastructure, particularly AWS.
• 3+ years of experience with PySpark, including usage with EMR or Glue Notebooks.
• 3+ years of experience with Apache Airflow for workflow orchestration.
• Solid experience with data analysis in fast-paced environments.
• A strong understanding of capital markets, financial systems, or prior experience in the financial domain is a must.
• Proficiency with cloud-native technologies and frameworks.
• Familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
• Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development.
• Excellent problem-solving skills and the ability to handle complex technical challenges.
• Strong communication and interpersonal skills for collaborating across teams and presenting solutions to diverse audiences.
• Ability to thrive in a fast-paced, dynamic environment.

Benefits
Standard Company Benefits
Pune, Maharashtra, India
Not disclosed
On-site
Full Time
About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
Job Summary: We are seeking a skilled and detail-oriented Business Analyst to join our team. The Business Analyst will be responsible for analyzing business processes, identifying areas for improvement, and developing strategies to enhance efficiency and productivity. The ideal candidate will possess strong analytical skills, excellent communication abilities, and a deep understanding of business operations and IT systems.

Key Responsibilities
• Collaborate with stakeholders to understand their needs and gather detailed business requirements.
• Analyze data to identify trends, patterns, and insights that inform business decisions.
• Develop and document business process models to illustrate current and future states.
• Propose and design technical and process solutions that meet business needs and objectives.
• Work with IT and other departments to implement solutions and ensure they align with business goals.
• Communicate findings, recommendations, and project updates to stakeholders and executives.
• Create detailed documentation of business requirements, processes, and solutions.
• Participate in testing and validating new systems and processes to meet business requirements.
• Identify opportunities for process improvements and contribute to ongoing optimization efforts.

Requirements
Skills:
• Strong analytical and problem-solving skills.
• Proficiency in data analysis tools and techniques.
• Excellent communication and interpersonal skills.
• Ability to work collaboratively with cross-functional teams.
• Experience with business process modeling and documentation tools.
• Knowledge of project management methodologies and tools.
Benefits
As per company standards.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
Job Overview: We seek a highly skilled Java Full Stack Developer who is comfortable with both frontend and backend development. The ideal candidate will be responsible for developing and designing frontend web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. The Java Full Stack Developer will be required to see a project through from conception to final product, requiring good organizational skills and attention to detail.

Key Responsibilities
• Frontend Development: Design and develop user-facing web applications using modern frontend languages like HTML, CSS, and JavaScript and frameworks like React.js, Angular, or Vue.js.
• Backend Development: Build and maintain server-side application logic using languages such as Node.js, Python, Ruby, Java, or PHP, and manage database interactions with MySQL, PostgreSQL, MongoDB, or other database systems.
• API Development and Integration: Develop and integrate RESTful APIs to connect frontend and backend components, ensuring smooth data flow and communication between different parts of the application.
• Database Management: Design, implement, and manage databases, ensuring data integrity, security, and optimal performance.
• Version Control and Collaboration: Use Git and other version control systems to track code changes and collaborate with other developers on the team.
• Deployment and DevOps: Automate deployment processes, manage cloud infrastructure, and ensure the scalability and reliability of applications through CI/CD pipelines.
• Security Implementation: Implement security best practices to protect the application from vulnerabilities, including authentication, authorization, and data encryption.
• Cross-Platform Optimization: Ensure the application is responsive and optimized for different devices, platforms, and browsers.
• Troubleshooting and Debugging: Identify, diagnose, and fix bugs and performance issues in the application, ensuring a smooth user experience.
• Collaboration and Communication: Work closely with product managers, designers, and other stakeholders to understand requirements and deliver solutions that meet business needs.
• Continuous Learning: Stay updated with the latest technologies, frameworks, and industry trends to continuously improve development practices.

Requirements
Technical Skills:
• Proficiency in frontend technologies like HTML, CSS, JavaScript, and frameworks like React.js, Angular, or Vue.js.
• Strong backend development experience with Node.js, Python, Java, or similar languages.
• Hands-on experience with databases like MySQL, PostgreSQL, MongoDB, or similar.
• Familiarity with version control systems, notably Git.
• Experience with cloud services like AWS, Azure, or Google Cloud.
• Knowledge of CI/CD pipelines and DevOps practices.
• Understanding of security principles and how to apply them to web applications.

Soft Skills:
• Excellent problem-solving skills and attention to detail.
• Strong communication skills and the ability to work collaboratively in a team environment.
• Ability to manage multiple tasks and projects simultaneously.
• Eagerness to learn new technologies and improve existing skills.

Benefits
As per company standards.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Job Title: Java Developer
Location: Hyderabad
Employment Type: Full-time
Experience: 4+ years
Domain: Banking and Insurance

Key Responsibilities:
• Design, develop, and maintain scalable Java applications using the Spring Boot framework.
• Build and deploy microservices-based architectures to support modular and efficient software solutions.
• Develop and optimize database interactions using Hibernate ORM.
• Collaborate with cross-functional teams including QA, DevOps, and Product Management to deliver end-to-end solutions.
• Write clean, reusable, and well-documented code following coding standards and best practices.
• Participate in code reviews, unit testing, and integration testing.
• Troubleshoot and resolve technical issues in a timely manner.
• Contribute to continuous improvement by suggesting and implementing new technologies or processes.
• Support deployments and basic cloud-related operations, working closely with cloud engineers or DevOps teams.

Requirements
• Strong proficiency in the Java programming language.
• Hands-on experience with the Spring Boot framework and microservices architecture.
• Solid knowledge of Hibernate or other ORM frameworks.
• Understanding of RESTful API development and integration.
• Basic knowledge of cloud platforms (AWS, Azure, or GCP) and cloud-native application concepts.
• Experience with relational databases (MySQL, PostgreSQL, Oracle, etc.).
• Familiarity with version control systems such as Git.
• Good understanding of the software development lifecycle (SDLC) and Agile methodologies.
• Strong problem-solving skills and attention to detail.
• Excellent communication and teamwork abilities.

Benefits
Company standard benefits.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Job Title: Technical Project Manager
Location: Hyderabad
Employment Type: Full-time
Experience: 10+ years
Domain: Banking and Insurance

We are seeking a Technical Project Manager to lead and coordinate the delivery of data-centric projects. This role bridges the gap between engineering teams and business stakeholders, ensuring the successful execution of technical initiatives, particularly in data infrastructure, pipelines, analytics, and platform integration.

Responsibilities:
• Lead end-to-end project management for data-driven initiatives, including planning, execution, delivery, and stakeholder communication.
• Work closely with data engineers, analysts, and software developers to ensure technical accuracy and timely delivery of projects.
• Translate business requirements into technical specifications and work plans.
• Manage project timelines, risks, resources, and dependencies using Agile, Scrum, or Kanban methodologies.
• Drive the development and maintenance of scalable ETL pipelines, data models, and data integration workflows.
• Oversee code reviews and ensure adherence to data engineering best practices.
• Provide hands-on support, when necessary, in Python-based development or debugging.
• Collaborate with cross-functional teams including Product, Data Science, DevOps, and QA.
• Track project metrics and prepare progress reports for stakeholders.

Requirements
Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
• 10+ years of experience in project management or technical leadership roles.
• Strong understanding of modern data architectures (e.g., data lakes, warehousing, streaming).
• Experience working with cloud platforms like AWS, GCP, or Azure.
• Familiarity with tools such as JIRA, Confluence, Git, and CI/CD pipelines.
• Strong communication and stakeholder management skills.

Benefits
Company standard benefits.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Job Description
We are seeking a highly skilled and experienced Senior Oracle Database Administrator (DBA) to join our dynamic team. The ideal candidate will have over 10 years of experience in Oracle database administration, with specialized expertise in Oracle Exadata and in migrating Oracle databases to Oracle Cloud Infrastructure (OCI). As a Senior Oracle DBA, you will play a critical role in ensuring the performance, availability, and security of our Oracle databases while leading complex migration projects to OCI.

Key Responsibilities:
• Manage, maintain, and optimize Oracle databases across all environments.
• Administer, monitor, and tune Oracle Exadata systems for optimal performance.
• Lead and execute the migration of on-premises Oracle databases to OCI.
• Implement and manage backup and recovery strategies, including RMAN and Data Guard.
• Conduct performance tuning of Oracle databases and Exadata systems.
• Ensure database security through best practices, including hardening and user access controls.
• Plan and execute database patching and upgrades.
• Develop and implement automation scripts for routine database tasks.
• Maintain detailed documentation of database configurations and processes.
• Collaborate with development and infrastructure teams to support business-critical applications.

Requirements
• 15+ years of hands-on experience in Oracle database administration.
• Strong knowledge of Oracle Exadata architecture and administration.
• Proven experience in migrating Oracle databases to OCI.
• Proficiency in Oracle Database versions 11g, 12c, 19c, and later.
• Expertise in Oracle RAC, ASM, Data Guard, RMAN, and OEM.
• Advanced performance tuning and optimization skills.
• Familiarity with Linux/Unix operating systems and shell scripting.
• Experience with PL/SQL development and debugging.
• Oracle Certified Professional (OCP) or Oracle Certified Master (OCM) is highly desirable.
• Excellent problem-solving, communication, and organizational skills.
• Bachelor’s degree in Computer Science, Information Technology, or a related field.

Benefits
Company standard benefits.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Job Title: PySpark Data Engineer
Experience: 5 – 8 Years
Location: Hyderabad
Employment Type: Full-Time

Job Summary: We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The ideal candidate will have 5–8 years of experience in designing and implementing data pipelines using PySpark, AWS Glue, and Apache Airflow, with strong proficiency in SQL. You will be responsible for building scalable data processing solutions, optimizing data workflows, and collaborating with cross-functional teams to deliver high-quality data assets.

Key Responsibilities:
• Design, develop, and maintain large-scale ETL pipelines using PySpark and AWS Glue.
• Orchestrate and schedule data workflows using Apache Airflow.
• Optimize data processing jobs for performance and cost-efficiency.
• Work with large datasets from various sources, ensuring data quality and consistency.
• Collaborate with Data Scientists, Analysts, and other Engineers to understand data requirements and deliver solutions.
• Write efficient, reusable, and well-documented code following best practices.
• Monitor data pipeline health and performance; resolve data-related issues proactively.
• Participate in code reviews, architecture discussions, and performance tuning.

Requirements
• 5–8 years of experience in data engineering roles.
• Strong expertise in PySpark for distributed data processing.
• Hands-on experience with AWS Glue and other AWS data services (S3, Athena, Lambda, etc.).
• Experience with Apache Airflow for workflow orchestration.
• Strong proficiency in SQL for data extraction, transformation, and analysis.
• Familiarity with data modeling concepts and data lake/data warehouse architectures.
• Experience with version control systems (e.g., Git) and CI/CD processes.
• Ability to write clean, scalable, and production-grade code.

Benefits
Company standard benefits.
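As an illustration of the kind of ETL logic this role involves, here is a minimal sketch of a batch transform step: deduplicate, apply a data-quality rule, and aggregate per key. It is written in plain Python so it runs anywhere; in a real pipeline the same logic would be PySpark DataFrame operations (dropDuplicates, filter, groupBy().agg()) on EMR or Glue, and the record fields and rules below are hypothetical.

```python
from collections import defaultdict

def transform(records):
    """Deduplicate on 'id', drop invalid amounts, total per 'account'."""
    seen = set()
    totals = defaultdict(float)
    for rec in records:
        if rec["id"] in seen:        # deduplicate on primary key
            continue
        seen.add(rec["id"])
        if rec["amount"] is None or rec["amount"] < 0:
            continue                 # basic data-quality rule
        totals[rec["account"]] += rec["amount"]
    return dict(totals)

raw = [
    {"id": 1, "account": "A", "amount": 10.0},
    {"id": 1, "account": "A", "amount": 10.0},  # duplicate row
    {"id": 2, "account": "A", "amount": 5.0},
    {"id": 3, "account": "B", "amount": -1.0},  # fails quality check
]
print(transform(raw))  # {'A': 15.0}
```

In PySpark the same shape of job gains distributed execution for free, which is why the posting emphasizes expressing transforms as DataFrame operations rather than row-by-row Python loops.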
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
Job Summary: We are seeking a skilled ETL Data Engineer to design, build, and maintain efficient and reliable ETL pipelines, ensuring seamless data integration, transformation, and delivery to support business intelligence and analytics. The ideal candidate should have hands-on experience with ETL tools like Talend, strong database knowledge, and familiarity with AWS services.

Key Responsibilities
• Design, develop, and optimize ETL workflows and data pipelines using Talend or similar ETL tools.
• Collaborate with stakeholders to understand business requirements and translate them into technical specifications.
• Integrate data from various sources, including databases, APIs, and cloud platforms, into data warehouses or data lakes.
• Create and optimize complex SQL queries for data extraction, transformation, and loading.
• Manage and monitor ETL processes to ensure data integrity, accuracy, and efficiency.
• Work with AWS services like S3, Redshift, RDS, and Glue for data storage and processing.
• Implement data quality checks and ensure compliance with data governance standards.
• Troubleshoot and resolve data discrepancies and performance issues.
• Document ETL processes, workflows, and technical specifications for future reference.

Requirements
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 4+ years of experience in ETL development, data engineering, or data warehousing.
• Hands-on experience with Talend or similar ETL tools (Informatica, SSIS, etc.).
• Proficiency in SQL and a strong understanding of database concepts (relational and non-relational).
• Experience working in an AWS environment with services like S3, Redshift, RDS, or Glue.
• Strong problem-solving skills and the ability to troubleshoot data-related issues.
• Knowledge of scripting languages like Python or shell scripting is a plus.
• Good communication skills to collaborate with cross-functional teams.

Benefits
As per company standards.
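To make the SQL side of the role concrete, here is a small self-contained sketch of an extract-and-transform query paired with a data-quality check. Python's built-in sqlite3 stands in for a warehouse connection (Redshift or RDS in the posting), and the table and column names are invented for illustration only.

```python
import sqlite3

# In-memory database stands in for the warehouse; sqlite3 is used
# only so the sketch is self-contained and runnable.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'acme', 120.0), (2, 'acme', 80.0),
                              (3, 'globex', 50.0), (4, NULL, 10.0);
""")

# Data-quality check: count rows that would be dropped for a missing key.
bad = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE customer IS NULL").fetchone()[0]

# Transform: aggregate order totals per customer, excluding bad rows.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    WHERE customer IS NOT NULL
    GROUP BY customer
    ORDER BY customer
""").fetchall()

print(f"rows failing quality check: {bad}")  # rows failing quality check: 1
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

The same pattern of "count and report the rejects, then aggregate the clean rows" is how ETL tools like Talend typically wire a quality gate ahead of a load step.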
Gurgaon, Haryana, India
Not disclosed
On-site
Full Time
Job Description
Senior Specialist Cloud Engineer - Contact Centre Innovation & GenAI

Role Summary: We are seeking an experienced and highly skilled Senior Specialist Cloud Engineer to join our innovative team. In this role, you will be responsible for designing, implementing, and maintaining cloud-based solutions using cutting-edge technologies. You will play a crucial role in optimizing our cloud infrastructure, improving system performance, and ensuring the scalability and reliability of our applications.

What you will do (Roles & Responsibilities):
• Design and implement complex cloud-based solutions using AWS services.
• Design and optimize database schemas and queries, particularly with DynamoDB.
• Write, test, and maintain high-quality Python code for cloud-based applications.
• Work on Amazon Connect and integrate Amazon services.
• Collaborate with cross-functional teams to identify and implement cloud-based solutions.
• Ensure security, compliance, and best practices in cloud infrastructure.
• Troubleshoot and resolve complex technical issues in cloud environments.
• Mentor junior engineers and contribute to the team's technical growth.
• Stay up to date with the latest cloud technologies and industry trends.

Requirements
What you need to succeed (must-haves):
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 5-9 years of experience in cloud engineering, with a strong focus on AWS.
• Extensive experience with Python programming and software development.
• Strong knowledge of database systems, particularly DynamoDB.
• Hands-on experience with Amazon Connect.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration abilities.

The ideal candidate will also have:
• Experience with containerization technologies (e.g., Docker, Kubernetes).
• Knowledge of CI/CD pipelines and DevOps practices.
• Familiarity with serverless architectures and microservices.
• Experience with data analytics and big data technologies.
• Understanding of machine learning and AI concepts.
• Contributions to open-source projects or technical communities.
• AWS certifications (e.g., Solutions Architect, DevOps Engineer) are a plus.
• Experience mentoring junior engineers or leading small teams.
• Strong project management skills and the ability to manage multiple priorities.

If you are passionate about cloud technologies, have a proven track record of delivering innovative solutions, and thrive in a collaborative environment, we want to hear from you. Join our team and help shape the future of cloud computing!

Benefits: As per company standards.
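For the DynamoDB schema-design skill called out above, one common approach is single-table design with composite partition/sort keys. The sketch below builds key strings only and makes no AWS calls; the entity names (`CUSTOMER`, `CALL`) and key layout are hypothetical illustrations, not a prescribed schema for this role.

```python
# Illustration of DynamoDB-style composite keys for single-table design.
# Entity names (CUSTOMER, CALL) are hypothetical; no AWS calls are made.

def customer_key(customer_id):
    """Key for a customer profile item; PK groups all items for the customer."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": "PROFILE"}

def call_key(customer_id, call_ts):
    """A contact-centre call stored under the same partition, with an
    ISO-8601 timestamp in the sort key so a range query returns calls
    in time order."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"CALL#{call_ts}"}

profile = customer_key("42")
call = call_key("42", "2024-05-01T10:30:00Z")

# Both items share a partition, so a single Query on PK fetches the
# profile plus call history; a begins_with(SK, "CALL#") condition
# narrows it to calls only.
print(profile["PK"] == call["PK"])  # True
```

The design choice here is that access patterns (fetch a customer and their calls together) drive the key schema, rather than normalized entity tables.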
Hyderabad, Telangana, India
Not disclosed
Remote
Full Time
Job Description
Primary Responsibilities:
• Install, configure, and troubleshoot Windows and macOS systems.
• Secure the network by installing and troubleshooting antivirus software and keeping the antivirus regularly updated.
• Configure and troubleshoot local and network printers; resolve hardware-related issues in printers and other peripherals.
• Provide admin rights, remote desktop access, and file and folder access to users as per requests.
• Troubleshoot end-to-end technical problems through remote tools.
• Monitor the compliance status of all desktops and servers in terms of patch/DAT status.
• Troubleshoot Office 365 issues and escalate to the proper team.
• Identify and resolve issues with Microsoft products (Excel, PowerPoint, Word, Teams).
• Troubleshoot issues with Citrix connections and client VPNs.
• Add devices to Azure AD; create, deploy, and manage Intune MDM.
• Create, deploy, and manage app protection policies and device configuration policies from the Intune endpoint manager.
• Create and manage security firewalls, switches, and ILL.
• Manage and update McAfee web controls and firewall rules.
• Maintain and monitor CCTV cameras and eSSL access control.
• Identify the causes of networking problems using diagnostic testing software and equipment.
• Resolve IT tickets regarding computer software, hardware, and application issues on time.
• Set up equipment for employee use, performing or ensuring proper installation of cables, operating systems, or appropriate software.
• Install and perform minor repairs to hardware, software, or peripheral equipment.

Requirements:
• Good experience in system administration.
• Technical support experience.
• Experience with the ITIL process.
• Experience in RIM (Remote Infrastructure Management).
• Good knowledge of virtualization and cloud concepts with VMware and/or OpenStack.
• Excellent communication skills.

Benefits: Standard company benefits.
Hyderabad, Telangana, India
Not disclosed
Remote
Full Time
We are seeking a highly experienced and hands-on Lead/Senior Data Engineer to architect, develop, and optimize data solutions in a cloud-native environment. The ideal candidate will have 7–12 years of strong technical expertise in AWS Glue, PySpark, and Python, along with experience designing robust data pipelines and frameworks for large-scale enterprise systems. Prior exposure to the financial domain or regulated environments is a strong advantage.

Key Responsibilities:
• Solution Architecture: Design scalable and secure data pipelines using AWS Glue, PySpark, and related AWS services (EMR, S3, Lambda, etc.).
• Leadership & Mentorship: Guide junior engineers, conduct code reviews, and enforce best practices in development and deployment.
• ETL Development: Lead the design and implementation of end-to-end ETL processes for structured and semi-structured data.
• Framework Building: Develop and evolve data frameworks, reusable components, and automation tools to improve engineering productivity.
• Performance Optimization: Optimize large-scale data workflows for performance, cost, and reliability.
• Data Governance: Implement data quality, lineage, and governance strategies in compliance with enterprise standards.
• Collaboration: Work closely with product, analytics, compliance, and DevOps teams to deliver high-quality solutions aligned with business goals.
• CI/CD Automation: Set up and manage continuous integration and deployment pipelines using AWS CodePipeline, Jenkins, or GitLab.
• Documentation & Presentations: Prepare technical documentation and present architectural solutions to stakeholders across levels.

Requirements
Required Qualifications:
• 7–12 years of experience in data engineering or related fields.
• Strong expertise in Python programming with a focus on data processing.
• Extensive experience with AWS Glue (both Glue Jobs and Glue Studio/Notebooks).
• Deep hands-on experience with PySpark for distributed data processing.
• Solid AWS knowledge: EMR, S3, Lambda, IAM, Athena, CloudWatch, Redshift, etc.
• Proven experience architecting and managing complex ETL workflows.
• Proficiency with Apache Airflow or similar orchestration tools.
• Hands-on experience with CI/CD pipelines and DevOps best practices.
• Familiarity with data quality, data lineage, and metadata management.
• Strong experience working in agile/Scrum teams.
• Excellent communication and stakeholder engagement skills.

Preferred/Good to Have:
• Experience in financial services, capital markets, or compliance systems.
• Knowledge of data modeling, data lakes, and data warehouse architecture.
• Familiarity with SQL (Athena/Presto/Redshift Spectrum).
• Exposure to ML pipeline integration or event-driven architecture is a plus.

Benefits:
• Flexible work culture and remote options.
• Opportunity to lead cutting-edge cloud data engineering projects.
• Skill-building in large-scale, regulated environments.
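The "framework building" responsibility above might take the shape of a small library of composable, reusable pipeline steps. The plain-Python sketch below is a hypothetical illustration: the step names and record schema are invented, and a production version would operate on PySpark DataFrames inside AWS Glue jobs rather than on Python lists.

```python
# Sketch of a reusable pipeline framework: named transform steps applied
# in order, with simple lineage recording for each step. Step names and
# the record schema are illustrative assumptions; a real implementation
# would target PySpark DataFrames in AWS Glue.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Pipeline:
    steps: List[Tuple[str, Callable]] = field(default_factory=list)
    lineage: List[str] = field(default_factory=list)

    def step(self, name: str, fn: Callable):
        """Register a named transform step (returns self for chaining)."""
        self.steps.append((name, fn))
        return self

    def run(self, records):
        """Apply every step in order, recording record counts as lineage."""
        for name, fn in self.steps:
            records = fn(records)
            self.lineage.append(f"{name}: {len(records)} records")
        return records

pipe = (
    Pipeline()
    .step("drop_nulls", lambda rs: [r for r in rs if r["amount"] is not None])
    .step("to_cents", lambda rs: [{**r, "cents": int(r["amount"] * 100)} for r in rs])
)

out = pipe.run([{"amount": 1.5}, {"amount": None}, {"amount": 2.0}])
print(out, pipe.lineage)
```

Recording per-step record counts gives a crude but useful form of lineage: when a downstream count looks wrong, the lineage log shows which step dropped the data.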