2.0 - 5.0 years
8 - 12 Lacs
Noida
Work from Office
MAQ LLC d.b.a. MAQ Software has multiple openings at Redmond, WA for: Software Data Operations Engineer (BS+2). Responsible for gathering & analyzing business requirements from customers. Implement, test and integrate software applications for use by customers. Develop & review cost-effective data architecture to ensure appropriateness with current industry advances in data management, cloud & user experience. Automate user test scenarios, debug & fix errors in cloud-based data infrastructure and reporting applications to meet customer needs. Must be able to travel temporarily to client sites and/or relocate throughout the United States. Requirements: Bachelor's Degree or foreign equivalent in Computer Science, Computer Applications, Computer Information Systems, Information Technology or a related field, with two years of work experience in the job offered, software engineer, systems analyst or a related job.
Posted 5 days ago
5.0 - 10.0 years
7 - 12 Lacs
Chennai
Work from Office
What You'll Need: 5 years of experience in scripting languages such as Python, JavaScript or TypeScript. Familiarity with low-code/no-code platforms such as Zapier. Ability to adapt to or learn other languages such as XML, internal scripting languages, etc. Proven collaborator with multiple stakeholders, including operations, engineering, and data infrastructure. Strong communication skills, high attention to detail and a proven ability to use metrics to drive decisions. A sense of ownership and a passion for delighting customers through innovation and creative solutions to complex problems. About the Role: We're seeking innovative problem-solvers with expertise in automation, scripting, and process optimization to help us scale and redefine the industry. If you thrive on collaboration and creating impactful solutions, come help us fix what's broken in real estate and transform the way people move. What You'll Do: Support operating teams by building and maintaining scripted, automated solutions to minimize the need for repetitive, manual effort; be responsive to real-time, time-sensitive operational needs. Partner with the engineering team to build products and tools, as well as evolve existing ones; tools focus on automation and process optimization for listings. Contribute to all phases of process and tool development, including ideation, prototyping, design, production and testing; iterate on the final product for continued improvement.
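As an illustrative sketch of the kind of scripted automation this role describes (all function and task names here are hypothetical, not from the posting), a retry wrapper keeps a time-sensitive operational task resilient to transient failures:

```python
import time
from functools import wraps

def with_retries(attempts=3, base_delay=0.1):
    """Retry a flaky operational task with exponential backoff."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise
                    # Back off base_delay, 2x, 4x, ... before retrying.
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

@with_retries(attempts=3, base_delay=0.01)
def sync_listing(listing_id, _state={"calls": 0}):
    """Hypothetical listing-sync task that fails on its first call."""
    _state["calls"] += 1
    if _state["calls"] < 2:
        raise ConnectionError("transient upstream error")
    return f"listing {listing_id} synced"
```

The decorator absorbs the first transient failure, so `sync_listing(42)` succeeds on the automatic second attempt.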
Posted 5 days ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
As a Senior SRE at Triomics, you will: architect, deploy, and manage robust, secure, and scalable infrastructure across AWS, Azure, and GCP; design and implement CI/CD pipelines using Jenkins to support rapid development and deployment cycles; orchestrate containers using Kubernetes and Docker for high-availability applications; implement Infrastructure as Code (IaC) using Terraform and Helm; set up and enforce secret management practices and security protocols across environments; automate workflows using Python and Bash; manage and optimize data infrastructure including PostgreSQL and Redis; administer Linux servers and handle in-depth troubleshooting; configure network setups and enforce security hardening techniques; deploy and monitor AI/ML workloads in production, ensuring performance and reliability; build monitoring and logging solutions using modern tools to support production-grade observability; support multi-tenant and single-tenant customer deployments with strong isolation and SLA guarantees; collaborate with engineering teams to define and maintain deployment workflows; and write and maintain clear and comprehensive technical documentation and SOPs. Requirements: Minimum 6 years of experience in DevOps engineering. Proven track record of 2+ years' longevity in each prior role. Strong experience in multi-cloud management (Azure, AWS, GCP). Solid background in Kubernetes, Docker, Jenkins, Terraform, Helm. Deep understanding of security best practices and secret management. Proficiency in Python and Bash scripting. Experience with Linux system administration, network configuration, and security hardening. Hands-on experience with PostgreSQL and Redis. Demonstrated experience with monitoring, logging, and incident response systems. Prior experience at a Y Combinator-backed startup or a similarly reputable startup is mandatory. Experience deploying and scaling AI/ML workloads in production. Excellent communication and documentation skills. Software development experience is a significant plus. Healthcare industry exposure is a significant plus.
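As a flavour of the observability work this role covers, here is a minimal sketch (log format and function name are invented for illustration) that computes a server error rate from simple access-log lines, the kind of signal a monitoring or SLA dashboard would track:

```python
from collections import Counter

def error_rate(log_lines):
    """Compute the fraction of 5xx responses from simple access-log lines.

    Each line is assumed (for this sketch) to end with an HTTP status code,
    e.g. "GET /api/patients 200".
    """
    statuses = Counter(line.rsplit(" ", 1)[-1] for line in log_lines if line.strip())
    total = sum(statuses.values())
    errors = sum(n for code, n in statuses.items() if code.startswith("5"))
    return errors / total if total else 0.0
```

In a real deployment this calculation would run inside a metrics pipeline (e.g. fed by a log shipper) rather than over raw strings.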
Posted 1 week ago
10.0 - 18.0 years
30 - 35 Lacs
Hyderabad
Remote
Role : Solution Architect Company : Feuji Software Solutions Pvt Ltd. Mode of Hire : Permanent Position Experience : 10+ Years Work Location : Hyderabad/ Remote About Feuji Feuji, established in 2014 and headquartered in Dallas, Texas, has rapidly emerged as a leading global technology services provider. With strategic locations including a Near Shore facility in San Jose, Costa Rica, and Offshore Delivery Centers in Hyderabad, and Bangalore, we are well-positioned to cater to a diverse clientele. Our team of 600 talented engineers drives our success, delivering innovative solutions to our clients and contributing to our recognition as a 'Best Place to Work For.' We collaborate with a wide range of clients, from startups to industry giants in sectors like Healthcare, Education, IT, and engineering, enabling transformative changes in their operations. Through partnerships with top technology providers such as AWS, Checkpoint, Gurukul, CoreStack, Splunk, and Micro Focus, we empower our clients' growth and innovation. With a clientele including Microsoft, HP, GSK, and DXC Technologies, we specialize in managed cloud services, cybersecurity, Product and Quality Engineering Services, and Data and Insights solutions, tailored to drive tangible business outcomes. Our commitment to creating 'Happy Teams' underscores our values and dedication to positive impact. Feuji welcomes exceptional talent to join our team, offering a platform for growth, development, and a culture of innovation and excellence. 
Key Responsibilities: Design and implement scalable, secure, and resilient cloud solutions tailored to enterprise needs. Architect hybrid solutions that integrate on-premises infrastructure with cloud services, focusing on seamless connectivity and data flow. Develop and manage cloud networking solutions, including virtual networks, subnets, VPN gateways, ExpressRoute, and traffic management. Ensure secure and optimized connectivity between on-premises environments and Azure cloud. Implement and oversee cloud security best practices, including identity and access management (IAM), encryption, firewalls, and security monitoring. Analyze and compare the cost implications of on-premises vs. cloud solutions. Optimize resources to balance performance with cost-effectiveness, providing recommendations for cost-saving strategies. Design and implement comprehensive disaster recovery (DR) plans, ensuring business continuity for enterprise applications. Work closely with clients to understand their business requirements and translate them into technical solutions. Provide strategic guidance on cloud adoption, migration, and optimization to senior stakeholders. Lead technical workshops, training sessions, and presentations for clients and internal teams. Oversee the end-to-end delivery of cloud solutions, ensuring projects are completed on time, within scope, and within budget. Collaborate with cross-functional teams to ensure the successful deployment of solutions. Develop and maintain comprehensive technical documentation, including architecture diagrams, configuration guides, and operational procedures. Ensure all documentation is up-to-date and accessible to relevant stakeholders. Skills, Knowledge and Expertise. Required Qualifications: 10+ years of Azure experience. 5+ years of solution architecture experience. Proven experience in designing and implementing enterprise-scale solutions. Experience with on-premises infrastructure, cloud migration strategies, and cost optimization. Experience in managing large-scale projects. 5+ years of Kubernetes experience. Data infrastructure experience. Terraform experience. Cloud certifications. Excellent communication skills. Strong multi-tasker. Self-starter. Team player. Preferred Qualifications: Consulting experience. Azure, AWS and GCP Professional-level certifications. Kubernetes certifications (CKA, CKAD, CKS).
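The on-premises vs. cloud cost comparison mentioned above reduces, in its simplest form, to a break-even calculation: on-prem trades a one-time capital expense for lower monthly operations. A minimal sketch (all figures and names are hypothetical, purely to show the arithmetic):

```python
def breakeven_months(onprem_capex, onprem_monthly_opex, cloud_monthly_cost):
    """Return the month after which cumulative on-prem spend undercuts cloud.

    On-prem: one-time capital expense plus a monthly operating cost.
    Cloud: pay-as-you-go monthly cost. Returns None if on-prem never wins.
    """
    if cloud_monthly_cost <= onprem_monthly_opex:
        return None  # cloud is cheaper every month; capex is never recovered
    saving_per_month = cloud_monthly_cost - onprem_monthly_opex
    months = onprem_capex / saving_per_month
    return int(months) + (0 if months.is_integer() else 1)
```

A real assessment would also weigh DR, elasticity, and staffing costs, which this arithmetic deliberately ignores.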
Posted 1 week ago
8.0 - 12.0 years
15 - 20 Lacs
Pune
Work from Office
We are looking for a highly experienced Lead Data Engineer / Data Architect to lead the design, development, and implementation of scalable data pipelines, data Lakehouse, and data warehousing solutions. The ideal candidate will provide technical leadership to a team of data engineers, drive architectural decisions, and ensure best practices in data engineering. This role is critical in enabling data-driven decision-making and modernizing our data infrastructure. Key Responsibilities: Act as a technical leader responsible for guiding the design, development, and implementation of data pipelines, data Lakehouse, and data warehousing solutions. Lead a team of data engineers, ensuring adherence to best practices and standards. Drive the successful delivery of high-quality, scalable, and reliable data solutions. Play a key role in shaping data architecture, adopting modern data technologies, and enabling data-driven decision-making across the team. Provide technical vision, guidance, and mentorship to the team. Lead technical design discussions, perform code reviews, and contribute to architectural decisions.
Posted 1 week ago
6.0 - 8.0 years
8 - 10 Lacs
Bengaluru
Work from Office
Primary Skills - Snowflake, DBT, AWS; Good to have Skills - Fivetran (HVR), Python Responsibilities: Design, develop, and maintain data pipelines using Snowflake, DBT, and AWS. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Optimize and troubleshoot existing data workflows to ensure efficiency and reliability. Implement best practices for data management and governance. Stay updated with the latest industry trends and technologies to continuously improve our data infrastructure. Required Skills: Proficiency in Snowflake, DBT, and AWS. Experience with data modeling, ETL processes, and data warehousing. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Preferred Skills: Knowledge of Fivetran (HVR) and Python. Familiarity with data integration tools and techniques. Ability to work in a fast-paced and agile environment. Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
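The pipelines this posting describes typically rely on idempotent incremental loads. As a stand-in sketch, SQLite's `ON CONFLICT` clause shows the same upsert pattern that a Snowflake `MERGE` or a dbt incremental model would express (table and column names are invented for illustration):

```python
import sqlite3

# Hypothetical staging-to-target upsert, using only the standard library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")

def upsert_customers(rows):
    """Insert new rows; update existing rows in place. Safe to re-run."""
    conn.executemany(
        """INSERT INTO customers (id, email) VALUES (?, ?)
           ON CONFLICT(id) DO UPDATE SET email = excluded.email""",
        rows,
    )
    conn.commit()

upsert_customers([(1, "a@x.com"), (2, "b@x.com")])
upsert_customers([(2, "b@new.com"), (3, "c@x.com")])  # idempotent re-load
```

Re-running a load never duplicates rows, which is what makes retries and backfills safe in an orchestrated pipeline.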
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Business Systems Analyst. The Business Systems Analyst (BSA) leads the definition of the solution for new client implementations or larger projects on an existing implementation. The BSA must be able to understand the client's business requirements and map those to our technology, then document and help communicate that vision to the client and to internal execution teams. Candidates should have a strong grasp of database architecture, data modeling, interface development and system integration using real-time web services. He or she should also have a solid understanding of CRM, CDP, email and database marketing concepts. Principal Responsibilities: Lead project scoping: gather and define project requirements; understand client workflows and business goals; elicit and comprehend use cases; learn the existing technical and data infrastructure; conduct gap analysis between the application and stated customer requirements; set expectations. Think strategically to define the solution recommendation: collaborate with architects and developers; estimate project impact (resources/hours); document the recommended solution; support the client team with the presentation and review process. Maintain documentation: draft requirements documents/functional specifications; update changes throughout the project lifecycle; author and manage tickets for internal communication. Contribute to successful execution and QA: serve as the internal SME on the solution; collaborate with development, QA and production support teams through the project lifecycle; proactively identify and address project risks; support QA and UAT to ensure requirements are met. Other Responsibilities: become a product expert; manage multiple competing priorities through effective organization and communication; recommend and institute best-practice methodology and tools; provide guidance to the client success team on technical capabilities, staffing and infrastructure needs. Qualifications: Management experience of similar roles. Experience contributing to project documentation including business requirements documentation, specifications, SOWs, LOEs, etc. Ability to understand and represent the needs of the end user in a software development environment. Strong consultative and advisory skills. Excellent written and verbal communications. Strong MS Office skills (Word, Excel, PowerPoint). Ability to acknowledge marketing and strategic needs to assess and recommend technical requirements. Ability to communicate complex technical concepts to technical and non-technical audiences. Subject matter expert and thought leader (supports the organization's processes and procedures and can implement a new product or major modifications from start to finish). Web-services experience with RESTful APIs desired. 5+ years of experience with software implementation from requirements through design, development, and user acceptance. Bachelor's Degree or higher in a technology-related field, or relevant experience in implementing software.
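The gap analysis and system-integration work a BSA leads often comes down to mapping a client's data fields onto an internal schema and flagging what doesn't fit. A minimal sketch (the field names here are invented, not from any real client system):

```python
# Hypothetical mapping from a client's CRM export fields to an internal
# marketing-database schema.
FIELD_MAP = {"FirstName": "first_name", "EmailAddr": "email", "Zip": "postal_code"}

def map_record(client_record):
    """Translate one client record; report fields the mapping doesn't cover."""
    mapped = {FIELD_MAP[k]: v for k, v in client_record.items() if k in FIELD_MAP}
    unmapped = sorted(k for k in client_record if k not in FIELD_MAP)
    return mapped, unmapped
```

The `unmapped` list is exactly the raw material of a gap analysis: every field the current integration cannot yet represent.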
Posted 2 weeks ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems. Implement data quality and validation processes within Ab Initio. Data modelling and analysis: collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes. Analyse and model data to ensure optimal ETL design and performance. Ab Initio components: utilize Ab Initio components such as Transform Functions, Rollup, Join, Normalize, and others to build scalable and efficient data integration solutions. Implement best practices for reusable Ab Initio components. Preferred technical and professional experience: Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization. Conduct performance tuning and troubleshooting as needed. Collaboration: work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes. Participate in design reviews and provide technical expertise to enhance overall solution quality. Documentation
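For readers unfamiliar with the Rollup and Join components named above, they are aggregation and keyed-merge operations. A rough Python analogue (record shapes invented for illustration; Ab Initio graphs express this declaratively, not in code):

```python
from itertools import groupby
from operator import itemgetter

def rollup(records, key, value):
    """Aggregate records by key, analogous to an Ab Initio Rollup component."""
    rows = sorted(records, key=itemgetter(key))
    return {k: sum(r[value] for r in g) for k, g in groupby(rows, key=itemgetter(key))}

def join(left, right, key):
    """Inner-join two record lists on a key, analogous to a Join component."""
    index = {r[key]: r for r in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]
```

Note that `rollup` sorts before grouping, mirroring the sorted-input requirement many graph-based ETL components have.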
Posted 2 weeks ago
5 - 7 years
8 - 14 Lacs
Surat
Work from Office
Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. 
Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
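At its core, the knowledge-graph work above operates on subject-predicate-object triples queried by pattern. Production stacks use RDF stores and SPARQL; this in-memory sketch (entity names invented, loosely in the spirit of CCO-style artifact modelling) shows the idea with nothing but the standard library:

```python
def query(triples, s=None, p=None, o=None):
    """Match (subject, predicate, object) triples; None acts as a wildcard,
    mimicking a basic SPARQL triple pattern over an in-memory graph."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A tiny hypothetical graph.
graph = [
    ("Device42", "rdf:type", "cco:Artifact"),
    ("Device42", "cco:has_part", "Sensor7"),
    ("Sensor7", "rdf:type", "cco:Artifact"),
]
```

For example, `query(graph, p="rdf:type")` lists every typed entity, and `query(graph, s="Device42", p="cco:has_part")` walks one relationship, the primitive from which graph traversal, enrichment, and recommendation features are built.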
Posted 1 month ago
5 - 7 years
8 - 14 Lacs
Bengaluru
Work from Office
Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. 
Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
Posted 1 month ago
8 - 13 years
20 - 35 Lacs
Delhi NCR, Gurgaon
Work from Office
Looking for a candidate as M. / Sr. M. in Data Engineering for an Aviation Company based out of Gurgaon. Hands-on software/data engineering experience in building large, scalable and reliable enterprise technology platforms. Interested candidates, please revert. Required Candidate profile: *Excellent programming skills in Python, creation of responsive dashboards, data mining, handling of large data sets. *Hands-on with one of the programming languages Scala/Python.
Posted 2 months ago
3 - 8 years
0 - 1 Lacs
Navi Mumbai, Thane
Work from Office
Role & responsibilities: As a Data Engineer at HDB Financial Ltd, he/she will be responsible for designing, developing, and maintaining our data pipelines, databases, and systems. You will work closely with cross-functional teams to ensure efficient data flow and support data-driven decision-making across the organization. Your main responsibilities will include: Designing and implementing scalable, reliable, and efficient data pipelines and ETL processes to collect, process, and store large volumes of structured and unstructured data from various sources. Collaborating with data scientists, analysts, and other stakeholders to understand data requirements and ensure data quality, integrity, and availability. Building and optimizing data warehouses, data lakes, and other storage solutions to support analytics, reporting, and machine learning initiatives. Developing and maintaining data infrastructure and tools, automating processes, and implementing best practices for data management, governance, and security. Troubleshooting issues, identifying bottlenecks, and optimizing performance of data systems to meet business needs and performance goals. Managing the end-to-end implementation of the Data Warehouse/Data Lake project, ensuring adherence to timelines and quality standards. Working closely with the development team to design and implement appropriate data models, ETL processes, and reporting solutions. Preferred candidate profile: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven experience (3+ years, maximum 5-6 years) working as a Data Engineer or in a similar role, handling large-scale data infrastructure and ETL processes. Proficiency in programming languages such as Python, Java, or SQL for data processing and manipulation. Strong knowledge of database technologies (SQL, Redshift), data warehousing concepts, and big data tools/frameworks (PySpark, Kafka, etc.). Excellent communication and interpersonal skills with the ability to convey technical concepts to non-technical stakeholders. Experience with cloud platforms like AWS or Google Cloud and their data services (S3, Redshift, etc.). Candidates from the BFSI industry are preferred. Please note: Kindly answer all the below-mentioned questions. Also upload your updated CV.
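The data-quality responsibilities in this posting usually take the shape of a cleansing step that separates valid records from rejects before loading. A minimal sketch (the field names and rules are hypothetical, invented purely for illustration):

```python
def clean_loan_records(rows):
    """Hypothetical cleansing step: drop rows missing an id, normalise
    amounts to float, and collect rejects separately for review."""
    valid, rejects = [], []
    for row in rows:
        if not row.get("loan_id"):
            rejects.append(row)          # cannot key the record: reject it
            continue
        try:
            # Accept "1,000.50"-style strings as well as plain numbers.
            amount = float(str(row.get("amount", "")).replace(",", ""))
        except ValueError:
            rejects.append(row)          # unparseable amount: reject it
            continue
        valid.append({"loan_id": row["loan_id"], "amount": amount})
    return valid, rejects
```

Keeping rejects rather than silently dropping them is what makes data-quality metrics and reconciliation possible downstream.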
Posted 2 months ago
8 - 10 years
40 - 55 Lacs
Noida
Work from Office
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes, including team management. Responsibilities: Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes. Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts. Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes. Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency. Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements. Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms). Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes. (Immediate Joiners)
Posted 2 months ago
3 - 5 years
8 - 14 Lacs
Hyderabad
Work from Office
Profile: We are looking for an experienced and high-energy MLOps Engineer. The primary function of this role is to design enterprise architecture: envision and drive the solution architecture after hearing the product's vision and user stories, with the ability to envision and drive a proactive architectural roadmap for an existing product, keeping future requirements in mind. Requirements: - Experience building end-to-end systems as a Platform Engineer, MLOps Engineer, or Data Engineer (or equivalent). - Hands-on expertise in Python and ML frameworks. - Expertise with Linux administration. - Experience working with cloud computing and database systems. - Experience building custom integrations between cloud-based systems using APIs. - Experience developing and maintaining ML systems built with open source tools. - Experience developing with containers and Kubernetes in cloud computing environments. - Familiarity with one or more data-oriented workflow orchestration frameworks (KubeFlow, Airflow, Argo, etc.). - Ability to translate business needs to technical requirements. - Strong understanding of software testing, benchmarking, and continuous integration. - Exposure to machine learning methodology and best practices. - Experience with Prometheus and Grafana integrations for highly scalable environments. Responsibilities: - Design the data pipelines and engineering infrastructure to support enterprise machine learning systems at scale. - Take offline models data scientists build and turn them into a real machine learning production system. - Develop and deploy scalable tools and services to handle machine learning training and inference. - Identify and evaluate new technologies to improve performance, maintainability, and reliability of machine learning systems. - Apply software engineering rigor and best practices to machine learning, including CI/CD, automation, etc. - Support model development, with an emphasis on auditability, versioning, and data security.
- Facilitate the development and deployment of proof-of-concept machine learning systems.
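The orchestration frameworks listed above (KubeFlow, Airflow, Argo) all schedule pipeline steps by resolving a dependency graph. The core idea fits in a few lines with the standard library's `graphlib` (the step names here are a hypothetical training pipeline, not any framework's API):

```python
from graphlib import TopologicalSorter

# A hypothetical ML training pipeline: each step maps to the set of steps
# it depends on. Orchestrators resolve the same graph before scheduling.
pipeline = {
    "train": {"featurize"},
    "featurize": {"ingest"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
    "ingest": set(),
}
order = list(TopologicalSorter(pipeline).static_order())
```

`static_order()` yields a valid execution order (ingest before featurize, and so on through deploy); a real orchestrator additionally runs independent steps in parallel and retries failures.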
Posted 3 months ago
7 - 12 years
9 - 14 Lacs
Hyderabad
Work from Office
The Opportunity (Brief Overview of the Role): We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in designing, building, and maintaining our data infrastructure and systems. You will collaborate with cross-functional teams to ensure efficient and reliable data pipelines, implement data integration solutions, and support data-driven initiatives. This position requires strong technical expertise, problem-solving skills, and a deep understanding of data engineering principles. Roles & Responsibilities: Design and Develop Data Infrastructure: Architect, build, and optimize scalable data infrastructure, including data warehouses, data lakes, and ETL/ELT processes. Data Integration: Implement efficient and reliable data integration pipelines, ensuring the smooth flow of data from various sources to target systems. Data Modeling and Schema Design: Design and implement data models and schemas that support business requirements and optimize data retrieval and analysis. Data Quality and Governance: Develop and implement data quality checks, data validation processes, and data governance practices to maintain data integrity and accuracy. Performance Optimization: Identify performance bottlenecks, optimize data processing and storage systems, and improve overall data pipeline efficiency. Data Security: Ensure data privacy and security by implementing appropriate access controls, encryption techniques, and data protection measures. Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand their data needs and provide technical support. Documentation and Monitoring: Create and maintain technical documentation, data dictionaries, and monitoring systems to track data pipeline performance and identify issues. Troubleshooting and Issue Resolution: Investigate and resolve data-related issues, including data quality issues, system failures, and performance degradation. Data Architecture and Technology Evaluation: Stay updated with industry trends and emerging technologies in data engineering, and evaluate their potential impact and suitability for the organization. Qualifications: 3-8 years of experience in data engineering, data integration, or related roles. Strong proficiency in database management/development and working experience with relational databases, data modeling, and schema design. Proficiency in data integration tools, ETL/ELT frameworks, and data pipeline orchestration tools. Strong programming skills in languages such as Python/Java. Understanding of data warehousing concepts, dimensional modeling, and data governance practices. Knowledge of data security and privacy principles, as well as experience implementing data security measures. Working experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Glue, Azure Data Factory, GCP Dataflow).
Posted 3 months ago