6.0 - 10.0 years
0 - 0 Lacs
Hyderabad, Telangana
On-site
You will be joining QTek Digital, a leading data solutions provider known for its expertise in custom data management, data warehouse, and data science solutions. Our team of dedicated data professionals, including data scientists, data analysts, and data engineers, collaborates to address present-day challenges and pave the way for future innovations. At QTek Digital, we value our employees and focus on fostering engagement, empowerment, and continuous growth opportunities.

As a BI ETL Engineer at QTek Digital, you will take on a full-time remote position. Your primary responsibilities will include data modeling, applying analytical skills, implementing data warehouse solutions, and managing Extract, Transform, Load (ETL) processes. This role demands strong problem-solving capabilities and the ability to work autonomously.

To excel in this role, you should ideally possess:
- 6-9 years of hands-on experience in ETL and ELT pipeline development using tools such as Pentaho, SSIS, Fivetran, Airbyte, or similar platforms.
- 6-8 years of practical experience in SQL and other data manipulation languages.
- Proficiency in data modeling, dashboard creation, and analytics.
- Sound knowledge of data warehousing principles, particularly Kimball design.
- Familiarity with Pentaho and Airbyte administration (a bonus).
- Demonstrated expertise in data modeling, dashboard design, analytics, data warehousing, and ETL procedures.
- Strong troubleshooting and problem-solving skills.
- Effective communication and collaboration abilities.
- The ability to work both independently and as part of a team.
- A Bachelor's degree in Computer Science, Information Systems, or a related field.

This position is based in our Hyderabad office and offers an attractive compensation package ranging from INR 5-19 Lakhs, depending on factors such as your skills and prior experience. Join us at QTek Digital and be part of a dynamic team dedicated to shaping the future of data solutions.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Telangana
On-site
You will be joining Teradata, a company that believes in empowering individuals with better information through its cloud analytics and data platform for AI. By providing harmonized data, trusted AI, and faster innovation, Teradata enables customers and their clients to make more informed decisions across various industries.

As part of the team, your responsibilities will include designing, developing, and maintaining scalable enterprise applications, data processing, and engineering pipelines. You will write efficient, scalable, and clean code primarily in Go (Golang), Java, or Python. Collaborating with cross-functional teams, you will define, design, and implement new features while ensuring the availability, reliability, and performance of deployed applications. Integrating with CI/CD pipelines will be crucial for seamless deployment and development cycles. You will also monitor and optimize application performance, troubleshoot and investigate issues, resolve customer incidents, and support the Customer Support and Operations teams.

You will work with a high-performing engineering team that values innovation, continuous learning, and open communication. The team emphasizes mutual respect, empowering its members, celebrating diverse perspectives, and fostering professional growth. This is an Individual Contributor role reporting to the Engineering Manager.

To qualify for this role, you should have a B.Tech/M.Tech/MCA/MSc degree in CSE/IT or a related discipline, along with 3-5 years of relevant industry experience. Expertise in SQL and either Java or Golang is essential, as is experience with Python, REST APIs in Linux environments, and working in public cloud environments such as AWS, Azure, or Google Cloud. Excellent communication and teamwork skills are also required.

Preferred qualifications include experience with containerization (Docker) and orchestration tools (Kubernetes); modern data engineering tools such as Airbyte, Airflow, and dbt; good knowledge of Java/Python and development experience; familiarity with the Teradata database; a proactive and solution-oriented mindset; a passion for technology and continuous learning; the ability to work independently while contributing to the team's success; and creativity, adaptability, a strong sense of ownership, accountability, and a drive to make an impact.

Teradata prioritizes a people-first culture, offering a flexible work model, focusing on well-being, and being an anti-racist company dedicated to fostering a diverse, equitable, and inclusive environment that values individuals for who they are.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be responsible for developing scalable web applications using Python (FastAPI), React.js, and cloud-native technologies. Specifically, you will work on building a low-code/no-code AI agent platform, designing an intuitive workflow UI, and integrating with LLMs, enterprise connectors, and role-based access controls.

As a Full-Stack Developer, your responsibilities will include:
- Backend development: developing and optimizing APIs using FastAPI, and integrating with LangChain, Pinecone/Weaviate vector databases, and enterprise connectors such as Airbyte/NiFi (a minimal illustrative sketch follows this listing).
- Frontend development: building an interactive drag-and-drop workflow UI using React.js along with supporting libraries such as React Flow, D3.js, and TailwindCSS.
- Authentication and access control: implementing OAuth2, Keycloak, and role-based access controls for multi-tenant environments.
- Database design: working with PostgreSQL for structured data, MongoDB for unstructured data, and Neo4j for knowledge graphs.
- DevOps and deployment: using Docker, Kubernetes, and Terraform across cloud platforms such as Azure, AWS, and GCP.
- Performance optimization: enhancing API performance and frontend responsiveness for an improved user experience.
- Collaboration with AI and Data Engineers to ensure seamless integration of AI models.

To excel in this role, you should have at least 5 years of experience in FastAPI, React.js, and cloud-native applications. A strong understanding of REST APIs, GraphQL, and WebSockets is required, along with experience in JWT authentication, OAuth2, and multi-tenant security. Proficiency in PostgreSQL, MongoDB, Neo4j, and Redis is expected. Knowledge of workflow automation tools such as n8n, Node-RED, and Temporal.io will be beneficial, and familiarity with containerization tools (Docker, Kubernetes) and CI/CD pipelines is preferred. Any experience with Apache Kafka, WebSockets, or AI-driven chatbots would be considered a bonus.
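To make the backend side of this role concrete, here is a minimal FastAPI sketch of the kind of API work described above. It is illustrative only: the endpoint paths, Pydantic models, and in-memory store are hypothetical stand-ins, not part of the actual platform, which would use PostgreSQL/MongoDB and OAuth2/Keycloak behind the routes.

```python
# Minimal FastAPI sketch: a hypothetical endpoint pair for registering and
# fetching workflow definitions in a low-code agent platform.
from typing import Dict, List, Optional

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Agent Platform API (sketch)")


class WorkflowStep(BaseModel):
    name: str
    connector: str            # e.g. "airbyte" or "nifi" (illustrative values)
    config: dict = {}


class Workflow(BaseModel):
    id: str
    steps: List[WorkflowStep]
    description: Optional[str] = None


# In-memory store standing in for a real database in this sketch.
_workflows: Dict[str, Workflow] = {}


@app.post("/workflows", status_code=201)
def create_workflow(workflow: Workflow) -> Workflow:
    if workflow.id in _workflows:
        raise HTTPException(status_code=409, detail="workflow already exists")
    _workflows[workflow.id] = workflow
    return workflow


@app.get("/workflows/{workflow_id}")
def get_workflow(workflow_id: str) -> Workflow:
    workflow = _workflows.get(workflow_id)
    if workflow is None:
        raise HTTPException(status_code=404, detail="workflow not found")
    return workflow
```

Run locally with `uvicorn main:app --reload`; in the role as described, these routes would be protected by OAuth2/RBAC and backed by the listed databases rather than an in-memory dict.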
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced Full-Stack Developer with 5+ years of experience building scalable web applications using Python (FastAPI), React.js, and cloud-native technologies. In this role, you will be responsible for developing a low-code/no-code AI agent platform, implementing an intuitive workflow UI, and integrating with LLMs, enterprise connectors, and role-based access controls.

Your responsibilities will include:
- Backend development: developing and optimizing APIs using FastAPI, integrating with LangChain, vector databases (Pinecone/Weaviate), and enterprise connectors (Airbyte/NiFi).
- Frontend development: building an interactive drag-and-drop workflow UI using React.js (React Flow, D3.js, TailwindCSS).
- Security: implementing OAuth2, Keycloak, and role-based access controls (RBAC) for multi-tenant environments.
- Database design: working with PostgreSQL (structured data), MongoDB (unstructured data), and Neo4j (knowledge graphs).
- DevOps and deployment: deploying with Docker, Kubernetes, and Terraform across multi-cloud environments (Azure, AWS, GCP) to ensure smooth operations.
- Performance optimization: improving API performance and optimizing frontend responsiveness for a seamless user experience.
- Collaboration with AI and Data Engineers: working closely with the Data Engineering team to ensure smooth AI model integration.

To be successful in this role, you need 5+ years of experience in FastAPI, React.js, and cloud-native applications. Strong knowledge of REST APIs, GraphQL, and WebSockets is essential, along with experience in JWT authentication, OAuth2, and multi-tenant security. Proficiency in PostgreSQL, MongoDB, Neo4j, and Redis is expected, as is knowledge of workflow automation tools (n8n, Node-RED, Temporal.io) and familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines. Bonus skills include experience with Apache Kafka, WebSockets, or AI-driven chatbots.
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
Haryana
On-site
We are looking for a highly motivated and experienced AWS Engineer with AWS cloud experience and a strong desire to stay current with cloud development best practices. As an AWS Engineer, your primary responsibility will be to identify requirements and develop top-notch cloud-native solutions that are repeatable, scalable, and well-governed. You will deploy and thoroughly test solutions to ensure they are robust and secure, and you will create and maintain diagrams for the solutions deployed in production. (A brief serverless pipeline sketch illustrating this kind of work follows this listing.)

Key Requirements:
- Designing and developing RESTful services.
- Building serverless applications in AWS.
- Constructing real-time/streaming data pipelines.
- 3-4 years of SQL and Python programming experience.
- 2-3 years of experience with AWS technologies such as Glue, Redshift, Kinesis, Athena, CloudTrail, CloudWatch, Lambda, API Gateway, Step Functions, SQS, S3, IAM roles, and Secrets Manager.
- Proficiency with ETL tools such as Glue, Fivetran, Talend, Matillion, etc.
- 1-2 years of experience in dbt with data modeling, SQL, Jinja templating, and packages/macros for building robust data transformation pipelines.
- Experience with Airbyte for building ingestion modules and CDC mechanisms.
- Hands-on experience with distributed architectures handling large data volumes.
- Strong problem-solving skills and the ability to work independently.
- Knowledge of big data design patterns, NoSQL databases, and cloud-based data transformation technologies.
- Understanding of object-oriented design principles and enterprise integration patterns.
- Familiarity with messaging middleware and building cloud-based applications.
- Strong collaboration and communication skills and a self-driven work ethic.
- Proficiency in writing clean and effective code.

Preferred Skills:
- AWS cloud certifications.
- Experience with Airflow, MWAA, and Jinja templating in Python.
- Knowledge of DevOps methodologies and CI/CD pipeline design.
- Familiarity with PySpark, DevOps, SQL, and Python.
- Experience building real-time streaming data pipelines with Kafka and Kinesis.
- Understanding of data warehousing, data lake solutions, and Azure data engineering.
- Ability to create and maintain scalable AWS architecture.
- Collaboration with technical teams on modern architectures such as microservices, REST APIs, DynamoDB, Lambda, and API Gateway.
- Developing API-based, CDC, batch, and real-time data pipelines for structured and unstructured datasets.
- Integration with third-party systems, ensuring repeatability and scalability.
- Gathering requirements, developing solutions, and deploying them with development teams.
- Providing comprehensive solution documentation and collaborating with data professionals.
- Prioritizing data protection and cloud security in all aspects.

If you do not meet all the requirements listed but believe you have unique skills to offer, we encourage you to apply, as there may be a suitable opportunity for you in the future.
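As a rough illustration of the serverless, streaming-pipeline work this role describes (not a prescribed implementation; the bucket name, environment variable, and key layout are hypothetical assumptions), a Lambda handler consuming Kinesis records and landing them in S3 might look like this:

```python
# Hypothetical sketch: an AWS Lambda handler triggered by a Kinesis event
# source mapping. It decodes each record and appends the batch to S3 as
# newline-delimited JSON. Bucket and key layout are illustrative only.
import base64
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("LANDING_BUCKET", "example-landing-bucket")  # assumed name


def handler(event, context):
    rows = []
    for record in event.get("Records", []):
        # Kinesis delivers the payload base64-encoded under kinesis.data.
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))

    if not rows:
        return {"written": 0}

    now = datetime.now(timezone.utc)
    key = f"raw/events/dt={now:%Y-%m-%d}/{now:%H%M%S%f}.json"
    body = "\n".join(json.dumps(r) for r in rows)
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(rows), "key": key}
```

In a pipeline like the one described above, the landed files would typically be crawled or queried with Glue/Athena and transformed downstream with dbt, with IAM roles and Secrets Manager handling credentials.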
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Chennai
Remote
Who We Are

For 20 years, we have been working with organizations large and small to help solve business challenges through technology. We bring a unique combination of engineering and strategy to Make Data Work for organizations. Our clients range from the travel and leisure industry to publishing, retail, and banking. The common thread between our clients is their commitment to making data work, as seen through their investment in those efforts. In our quest to solve data challenges for our clients, we work with large enterprise, cloud-based, and marketing technology suites. We have a deep understanding of these solutions, so we can help our clients make the most of their investment in an efficient way and build a data-driven business. Softcrylic now joins forces with Hexaware to Make Data Work in bigger ways!

Why Work at Softcrylic?

Softcrylic provides an engaging, team-focused, and rewarding work environment where people are excited about the work they do and passionate about delivering creative solutions to our clients.

Work Timing: 12:30 pm to 9:30 pm (flexible)

How to approach the interview: all technical interview rounds will be conducted virtually. The final round will be a face-to-face interview with HR in Chennai, which also includes a 15-minute in-person technical assessment/discussion. Make sure to prepare for both the virtual and in-person components.

Job Description:
- 5+ years of experience working as a Data Engineer.
- Migrate existing datasets from BigQuery to Databricks using Python scripts (a minimal illustrative sketch follows this listing).
- Conduct thorough data validation and QA to ensure accuracy, completeness, parity, and consistency in reporting.
- Monitor the stability and status of migrated data pipelines, applying fixes as needed.
- Migrate data pipelines from Airflow to Airbyte/Dagster based on provided frameworks.
- Develop Python scripts to facilitate data migration and pipeline transformation.
- Perform rigorous testing on migrated data and pipelines to ensure quality and reliability.

Required Skills:
- Strong experience with Python for scripting.
- Good experience working with Databricks and BigQuery.
- Familiarity with data pipeline tools such as Airflow, Airbyte, and Dagster.
- Strong understanding of data quality principles and validation techniques.
- Ability to work collaboratively with cross-functional teams.

Contact: Dinesh M, dinesh.m@softcrylic.com, +91 89255 18191
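Here is a minimal sketch of the BigQuery-to-Databricks migration scripting mentioned above. It assumes the code runs on Databricks (where a `spark` session is provided), that the `google-cloud-bigquery` client is installed with GCP credentials configured, and that the table names are hypothetical placeholders rather than anything from this posting.

```python
# Hypothetical sketch: copy one BigQuery table into a Databricks Delta table.
# Assumes a Databricks runtime (spark available) and google-cloud-bigquery
# installed; table and project names are illustrative.
from google.cloud import bigquery


def migrate_table(bq_project: str, bq_table: str, target_table: str) -> int:
    """Read a BigQuery table into pandas, then write it as a Delta table."""
    client = bigquery.Client(project=bq_project)
    pdf = client.query(f"SELECT * FROM `{bq_table}`").to_dataframe()

    sdf = spark.createDataFrame(pdf)  # noqa: F821 - spark is provided by Databricks
    (sdf.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable(target_table))
    return sdf.count()


# Example with illustrative names:
# rows = migrate_table("my-gcp-project", "analytics.events", "bronze.events")
# print(f"migrated {rows} rows")
```

For large tables, pulling data through pandas is usually replaced by the Spark BigQuery connector, and the validation/QA duties above would compare row counts, checksums, and key aggregates between source and target before reporting is cut over.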
Posted 1 month ago
5.0 - 10.0 years
17 - 30 Lacs
Hyderabad
Remote
At Mitratech, we are a team of technocrats focused on building world-class products that simplify operations in the Legal, Risk, Compliance, and HR functions of Fortune 100 companies. We are a close-knit, globally dispersed team that thrives in an ecosystem supporting individual excellence, and we take pride in a diverse and inclusive work culture centered around great people practices, learning opportunities, and having fun! Our culture is the ideal blend of entrepreneurial spirit and enterprise investment, enabling us to move at a rapid pace with some of the most complex, leading-edge technologies available. Given our continued growth, we always have room for more intellect, energy, and enthusiasm. Join our global team and see why it's so special to be a part of Mitratech!

Job Description

We are seeking a highly motivated and skilled Analytics Engineer to join our dynamic data team. The ideal candidate will have a strong background in data engineering and analytics, with hands-on experience in modern analytics tools such as Airbyte, Fivetran, dbt, Snowflake, and Airflow. This role will be pivotal in transforming raw data into valuable insights, ensuring data integrity, and optimizing our data infrastructure to support the organization's data platform.

Essential Duties & Responsibilities
- Data Integration and ETL Processes: Design, implement, and manage ETL pipelines using tools like Airbyte and Fivetran to ensure efficient and accurate data flow from various sources into our Snowflake data warehouse. Maintain and optimize existing data integration workflows to improve performance and scalability (a minimal orchestration sketch follows this listing).
- Data Modeling and Transformation: Develop and maintain data models using dbt / dbt Cloud to transform raw data into structured, high-quality datasets that meet business requirements. Ensure data consistency and integrity across datasets and implement data quality checks.
- Data Warehousing: Manage and optimize our Redshift / Snowflake data warehouses, ensuring they meet performance, storage, and security requirements. Implement best practices for data warehouse management, including partitioning, clustering, and indexing.
- Collaboration and Communication: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions that meet their needs. Communicate complex technical concepts to non-technical stakeholders in a clear and concise manner.
- Continuous Improvement: Stay up to date with the latest developments in data engineering and analytics tools, and evaluate their potential to enhance our data infrastructure. Identify and implement opportunities for process improvement, automation, and optimization within the data pipeline.

Requirements & Skills
- Education and Experience: Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field. 3-5 years of experience in data engineering or analytics engineering roles. Experience in AWS and DevOps is a plus.
- Technical Skills: Proficiency with modern ETL tools such as Airbyte and Fivetran. Must have experience with dbt for data modeling and transformation. Extensive experience working with Snowflake or similar cloud data warehouses. Solid understanding of SQL and experience writing complex queries for data extraction and manipulation. Familiarity with Python or other programming languages used for data engineering tasks.
- Analytical Skills: Strong problem-solving skills and the ability to troubleshoot data-related issues. Ability to understand business requirements and translate them into technical specifications.
- Soft Skills: Excellent communication and collaboration skills. Strong organizational skills and the ability to manage multiple projects simultaneously. Detail-oriented with a focus on data quality and accuracy.

We are an equal-opportunity employer that values diversity at all levels. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity, disability, or veteran status.
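As a rough sketch of how the Airbyte-to-dbt flow described above is commonly orchestrated (assuming Airflow 2.x; the Airbyte server URL, connection ID, and dbt project path are hypothetical placeholders, not Mitratech's actual setup):

```python
# Hypothetical sketch: an Airflow 2.x DAG that triggers an Airbyte sync and,
# once that task completes, runs dbt models against the warehouse.
# URL, connection ID, and dbt project path are placeholder assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

AIRBYTE_URL = "http://airbyte-server:8001/api/v1/connections/sync"  # assumed OSS endpoint
CONNECTION_ID = "<airbyte-connection-id>"  # placeholder

with DAG(
    dag_id="airbyte_then_dbt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    trigger_airbyte_sync = BashOperator(
        task_id="trigger_airbyte_sync",
        bash_command=(
            f"curl -sf -X POST {AIRBYTE_URL} "
            f"-H 'Content-Type: application/json' "
            f"-d '{{\"connectionId\": \"{CONNECTION_ID}\"}}'"
        ),
    )

    run_dbt_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="cd /opt/dbt/analytics_project && dbt run --target prod",
    )

    trigger_airbyte_sync >> run_dbt_models
```

In practice, the Airbyte provider's trigger-sync operator (or Dagster assets, or dbt Cloud jobs) would usually replace the raw curl call, but the dependency shape, ingest first and transform second, stays the same.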
Posted 1 month ago
13.0 - 20.0 years
40 - 45 Lacs
Bengaluru
Work from Office
Principal Architect - Platform & Application Architect

Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software and data platform architecture and technology strategy, including 5+ years in architectural leadership roles, with architecture and data platform expertise
Education: Bachelor's/Master's in CS, Engineering, or a related field

Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle (ingestion, processing, storage, and analytics) while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities
1. Architecture & Strategy
- Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components).
- Define and document data blueprints, data domain models, and architectural standards.
- Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.
2. Data Ingestion & Processing
- Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte (a minimal consumer sketch follows this listing).
- Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster).
- Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.
3. Storage & Modeling
- Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi).
- Develop efficient data strategies for both OLAP and OLTP workloads.
- Guide schema evolution, data versioning, and performance tuning.
4. Governance, Security, and Compliance
- Establish data governance, cataloging, and lineage-tracking frameworks.
- Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
- Promote standardization and best practices across business units.
5. Platform Engineering & DevOps
- Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines.
- Ensure observability, reliability, and cost efficiency of the platform.
- Define SLAs, capacity planning, and disaster recovery plans.
6. Collaboration & Mentorship
- Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals.
- Mentor teams on architecture principles, technology choices, and operational excellence.

Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12+ years of experience in software engineering, including 5+ years in architectural leadership roles.
- Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js.
- Strong hands-on experience building scalable data platforms in on-premise, hybrid, or cloud environments.
- Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg.
- Familiarity with data mesh, data fabric, and lakehouse paradigms.
- Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles.
- Demonstrated success leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms.
- Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels.
- Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.
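As a small, hedged sketch of the real-time ingestion pattern referenced in the responsibilities above (using the kafka-python client; the topic, broker, and landing path names are illustrative assumptions, and a production pipeline would more likely land data as Delta or Iceberg via Flink or Spark):

```python
# Hypothetical sketch: consume JSON events from a Kafka topic and append them
# to a date-partitioned, newline-delimited JSON landing area that stands in
# for a data-lake bronze layer. Topic/broker/path names are illustrative.
import json
from datetime import datetime, timezone
from pathlib import Path

from kafka import KafkaConsumer  # pip install kafka-python

TOPIC = "iot.sensor.events"                 # assumed topic name
BROKERS = ["localhost:9092"]                # assumed brokers
LANDING_DIR = Path("/data/landing/bronze")  # assumed landing path


def run() -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="bronze-lander",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:
        event = message.value
        partition_dir = LANDING_DIR / f"dt={datetime.now(timezone.utc):%Y-%m-%d}"
        partition_dir.mkdir(parents=True, exist_ok=True)
        out_file = partition_dir / f"{TOPIC}.jsonl"
        with out_file.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(event) + "\n")


if __name__ == "__main__":
    run()
```

The architectural decisions this role owns sit around code like this: choosing the table format and partitioning scheme, deciding between Kafka/Flink and managed connectors such as Airbyte, and wiring the landing zone into governance, lineage, and access controls.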
Posted 1 month ago