4 Job openings at ixceed
Python Architect

Chennai, Tamil Nadu

10+ years

INR 35.0 - 50.0 Lacs P.A.

Work from Office


Role: Python Architect
Experience: 10+ years
Location: Chennai
Notice Period: 30-60 days

Job Description:

Responsibilities:
- At least 8 years of software development experience in Java or Python, ReactJS, Docker, and Kubernetes, preferably in the telecom domain.
- Able to articulate customer requirements into high-level and low-level specification documents.
- Able to guide and mentor a team of 6-8 people on the technical side, strictly adhering to process guidelines during the implementation phase of the project.
- Develop and design Python programs and Flask-based microservices based on customer requirements (a minimal sketch follows this listing).
- Strong knowledge of and solid experience with advanced Python programming is a must.
- Experience developing high-performance Python projects is a must.
- Good to have: knowledge of ELK/Grafana.
- Good knowledge of Kubernetes, Docker, and cloud-native principles.
- Able to communicate directly with customers and participate in direct technical discussions.
- Bachelor's or Master's degree in CSE/ECE/EEE/EI/IT.
- Experience working on Agile projects is a value addition.
- Knowledge of and experience with the professional-services project lifecycle (scoping, requirements, construction, QA/test).
- Design, implementation, testing, integration, and debugging of Python-based microservices and applications.
- Convert requirements into high-quality code while working closely with a team of highly skilled professionals to deliver top-quality software to the Eden NET customer base.
- Proven commercial Python development experience (more than just scripting); excellent understanding of object-oriented methodology with strong design, implementation, and debugging skills.
- Understanding of, and hands-on experience with, machine learning and key algorithms.
- Proven experience working with relational database systems such as MySQL.
- Experience writing automated unit and integration tests in Python.
- Experience using and creating RESTful APIs.
- Good experience in continuous refactoring.
- First-level effort estimation for features; interface with system architects to understand the impact of system-level features on the modules.
- Reviews specifications, architecture and design, code, test strategy, and test cases for the feature.
- Works toward continuously improving code quality and automating test cases.
- Acts as a mentor for team members; good communication and teamwork skills.
- Ability to take initiative and lead complex activities along with a team.
- Experience working in a multicultural environment.
- Highly motivated; constantly seeks avenues for continuous improvement.

Job Type: Permanent
Pay: ₹3,500,000.00 - ₹5,000,000.00 per year
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus

Required Experience:
- Java & Python: 8 years
- ReactJS: 7 years
- Docker and Kubernetes: 5 years
- Telecom domain: 5 years
- Advanced Python: 5 years
- Machine learning and key algorithms: 4 years
- MySQL: 1 year
- RESTful APIs: 5 years

Work Location: In person
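The core build-out this role describes is Flask-based microservices backed by automated unit tests. As a rough, illustrative sketch only (the /orders endpoint, payload shape, and in-memory store are hypothetical, not taken from the posting), a minimal Flask service with a pytest-style test might look like:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for a real backing service.
_ORDERS = {}

@app.post("/orders")
def create_order():
    payload = request.get_json(force=True)
    order_id = len(_ORDERS) + 1
    _ORDERS[order_id] = payload
    return jsonify({"id": order_id}), 201

@app.get("/orders/<int:order_id>")
def get_order(order_id):
    order = _ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

# pytest-style unit test using Flask's built-in test client.
def test_create_and_fetch_order():
    client = app.test_client()
    created = client.post("/orders", json={"sku": "ABC", "qty": 2})
    assert created.status_code == 201
    order_id = created.get_json()["id"]
    fetched = client.get(f"/orders/{order_id}")
    assert fetched.get_json()["qty"] == 2

The test drives the service through Flask's built-in test client, so it runs without a live server; in a real microservice the in-memory dict would be replaced by a database or a downstream service call.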

Data Engineer (AWS)

Gurugram, Haryana

8 years

INR 30.0 - 38.0 Lacs P.A.

Hybrid


Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:

We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and using AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes (a minimal ETL sketch follows this listing).
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS using services such as S3, Redshift, Lambda, Glue, and Kinesis.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective, scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to meet compliance and security standards.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks; experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python and SQL, plus other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including warehousing solutions such as AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience designing data models, schemas, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with the ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar technologies.
- Data Security: Experience implementing cloud security best practices and managing data privacy requirements.
- Data Streaming: Familiarity with streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Work Location: Hybrid remote in Gurugram, Haryana
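This posting centers on ETL into S3-backed warehouses. As a minimal sketch, assuming hypothetical bucket names, an "id" key column, and a toy cleansing rule (none of which come from the posting), an extract-transform-load step using boto3 and pandas might look like:

import io

import boto3
import pandas as pd

# Hypothetical bucket names; the posting names S3, Glue, and Redshift
# but does not describe a concrete pipeline.
RAW_BUCKET = "example-raw-zone"
CURATED_BUCKET = "example-curated-zone"

def run_etl(key: str) -> str:
    """Extract a raw CSV from S3, apply basic cleansing, load Parquet back."""
    s3 = boto3.client("s3")

    # Extract: stream the raw object into a DataFrame.
    raw = s3.get_object(Bucket=RAW_BUCKET, Key=key)
    df = pd.read_csv(io.BytesIO(raw["Body"].read()))

    # Transform: stand-ins for real cleansing and transformation rules.
    df = df.drop_duplicates().dropna(subset=["id"])
    df.columns = [c.strip().lower() for c in df.columns]

    # Load: write columnar output to the curated zone, e.g. for a Glue
    # crawler or Redshift Spectrum to pick up.
    out = io.BytesIO()
    df.to_parquet(out, index=False)
    out_key = key.rsplit(".", 1)[0] + ".parquet"
    s3.put_object(Bucket=CURATED_BUCKET, Key=out_key, Body=out.getvalue())
    return out_key

Writing Parquet to a curated prefix is one common hand-off point: SQL consumers can then query the files through a Glue catalog or load them into Redshift.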

Data Engineer (AWS)

Gurugram, Haryana

6 years

INR 30.0 - 38.0 Lacs P.A.

Hybrid


Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:

We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and using AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS using services such as S3, Redshift, Lambda, Glue, and Kinesis.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective, scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages (a Lambda-trigger sketch follows this listing).
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to meet compliance and security standards.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks; experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python and SQL, plus other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including warehousing solutions such as AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience designing data models, schemas, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with the ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar technologies.
- Data Security: Experience implementing cloud security best practices and managing data privacy requirements.
- Data Streaming: Familiarity with streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday

Required Experience:
- Data Engineering: 6 years
- AWS: 4 years
- Python: 4 years

Work Location: Hybrid remote in Gurugram, Haryana
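This listing repeats the pipeline above but also stresses automation. As a complementary sketch, assuming a hypothetical Glue job name and the standard S3 put-event trigger shape, a Lambda handler that starts a Glue job for each arriving object might look like:

import boto3

glue = boto3.client("glue")

# Hypothetical Glue job name; the posting mentions Lambda and Glue but
# does not specify a concrete pipeline.
GLUE_JOB_NAME = "example-curation-job"

def lambda_handler(event, context):
    """Standard AWS Lambda entry point for an S3 put-event trigger."""
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hand the newly arrived object to the Glue job as arguments.
        response = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )
        runs.append(response["JobRunId"])
    return {"started_runs": runs}

In production the handler would typically also handle retries and deduplicate events, since S3 event notifications are delivered at least once.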

Gen AI Engineer

Delhi

6 years

INR 20.0 - 30.0 Lacs P.A.

Hybrid


Role: Gen AI Engineer
Location: Delhi
Mode of Work: Hybrid
Notice Period: 0-25 days

Job Description:

Key Responsibilities:
- Designing and developing AI models: creating architectures, algorithms, and frameworks for generative AI.
- Implementing AI models: building and integrating AI models into existing systems and applications.
- Working with LLMs and other AI technologies: using tools and techniques such as LangChain, Haystack, and prompt engineering.
- Data preprocessing and analysis: preparing data for use in AI models.
- Collaborating with other teams: working with data scientists, product managers, and other stakeholders.
- Testing and deploying AI models: evaluating model performance and deploying models to production environments.
- Monitoring and optimizing AI models: tracking model performance, identifying issues, and optimizing models for better results.
- Staying up to date with the latest advancements in Gen AI: learning about new techniques, models, and frameworks.

Required Skills:
- Strong programming skills in Python, the preferred language for AI development.
- Knowledge of generative AI, NLP, and LLMs: understanding the principles behind these technologies and how to use them effectively.
- Experience with RAG pipelines and vector databases: understanding how to build and use retrieval-augmented generation pipelines (a toy sketch follows this listing).
- Familiarity with AI frameworks and libraries such as LangChain, Haystack, and other open-source libraries.
- Understanding of prompt engineering and tokenization: how to optimize prompts and manage tokenization.
- Experience integrating and fine-tuning AI models, including deploying and maintaining them in production environments.
- Excellent communication and problem-solving skills, including the ability to explain complex technical concepts to non-technical stakeholders.

Optional Skills:
- Experience with cloud computing platforms (GCP, AWS, Azure), helpful for deploying and managing AI models.
- Familiarity with MLOps practices, for building and deploying AI models in a scalable, reliable manner.
- Experience with DevOps practices, for automating the development and deployment of AI models.

Job Type: Permanent
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Schedule: Day shift

Required Experience:
- Total: 6 years
- GenAI: 5 years
- Python: 3 years
- LLM: 4 years
- OpenAI, Claude, Gemini: 3 years (preferred)
- Azure: 3 years

Work Location: In person
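The skills list asks for RAG pipelines backed by vector databases. As a toy, self-contained sketch of the retrieve-then-augment flow (the bag-of-words "embedding", corpus, and prompt template are all stand-ins, not anything from the posting; a real pipeline would use an embedding model and a vector store such as FAISS or pgvector), it might look like:

import math
from collections import Counter

# Toy corpus standing in for chunks stored in a vector database.
DOCUMENTS = [
    "Flask services are deployed behind a load balancer.",
    "The data pipeline loads curated Parquet files into Redshift.",
    "Prompt templates are versioned alongside the application code.",
]

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words count vector. A real pipeline
    # would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Augment the prompt with retrieved context before calling an LLM.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How are prompt templates managed?"))

The final string would be sent to an LLM; swapping embed() for a real embedding model and DOCUMENTS for a vector-store query leaves the overall retrieve-then-augment flow unchanged.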
